Unity Manual
Welcome to Unity.
Unity is made to empower you to create the best interactive entertainment or multimedia experience that you can. This manual is designed to help you learn how to use Unity, from basic to advanced techniques. It can be read from start to finish or used as a reference.
The manual is divided into different sections. The first section, User Guide, is an introduction to Unity's interface, asset workflow, and the basics of building a game. If you are new to Unity, you should start by reading the Unity Basics subsection.
The iOS Guide addresses iOS-specific topics such as the iOS scripting API, optimizations, and general platform development questions.
The Android Guide addresses Android-specific topics such as setting up the Android SDK and general development questions.
The next section, FAQ, is a collection of frequently asked questions about performing common tasks that require a few steps.
The last section, Advanced, addresses topics such as game optimization, shaders, file sizes, and deployment.
When you've finished reading, take a look at the Reference Manual and the Scripting Reference for further details about the different possibilities of constructing your games with Unity.
If you find that any question you have is not answered in this manual please ask on Unity Answers or Unity Forums. You will be able to find your answer there.
Happy reading,
The Unity team
The Unity Manual Guide contains some sections that apply only to certain platforms. Please select which platforms you want to see. Platform-specific information can always be seen by clicking on the disclosure triangles on each page.
Page last updated: 2012-11-16
User Guide
This section of the Manual is focused on the features and functions of Unity. It discusses the interface, core Unity building blocks, asset workflow, and basic gameplay creation. By the time you are done reading the user guide, you will have a solid understanding of how to use Unity to put together an interactive scene and publish it.
We recommend that new users begin by reading the Unity Basics section.
- Unity Basics
- Building Scenes
- Asset Import and Creation
- Creating Gameplay
- Instantiating Prefabs at runtime
- Input
- Transforms
- Physics
- Adding Random Gameplay Elements
- Particle Systems
- Mecanim Animation System
- Animations (Legacy)
- Navmesh and Pathfinding (Pro only)
- Sound
- Game Interface Elements
- Networked Multiplayer
Unity Basics
This section is your key to getting started with Unity. It will explain the Unity interface, menu items, using assets, creating scenes, and publishing builds.
When you are finished reading this section, you will understand how Unity works, how to use it effectively, and the steps to put a basic game together.
Learning the Interface
We'll begin with an overview of the Unity editor interface to help you get familiar with it. The main editor window is made up of several tabbed windows, called Views. There are several types of Views in Unity, each with a specific purpose, which are described in the sections below.

Project Browser
In this view, you can access and manage the assets that belong to your project.

The left panel of the browser shows the folder structure of the project as a hierarchical list. When a folder is selected from the list by clicking, its contents will be shown in the panel to the right. The individual assets are shown as icons that indicate their type (script, material, sub-folder, etc). The icons can be resized using the slider at the bottom of the panel; they will be replaced by a hierarchical list view if the slider is moved to the extreme left. The space to the left of the slider shows the currently selected item, including a full path to the item if a search is being performed.
Above the project structure list is a Favorites section where you can keep frequently-used items for easy access. You can drag items from the project structure list to the Favorites and also save search queries there (see Searching below).
Just above the panel is a "breadcrumb trail" that shows the path to the folder currently being viewed. The separate elements of the trail can be clicked for easy navigation around the folder hierarchy. When searching, this bar changes to show the area being searched (the root Assets folder, the selected folder or the Asset Store) along with a count of free and paid assets available in the store, separated by a slash. There is an option in the General section of Unity's Preferences window to disable the display of Asset Store hit counts if they are not required.

Along the top edge of the window is the browser's toolbar.

Located at the left side of the toolbar, the menu lets you add new assets and sub-folders to the current folder. To its right are a set of tools to allow you to search the assets in your project.
The Window menu provides the option of switching to a one-column version of the project view, essentially just the hierarchical structure list without the icon view. The lock icon next to the menu enables you to "freeze" the current contents of the view (ie, stop them being changed by events elsewhere) in a similar manner to the inspector lock.
Searching
The browser has a powerful search facility that is especially useful for locating assets in large or unfamiliar projects. The basic search will filter assets according to the text typed in the search box.

If you type more than one search term then the search is narrowed, so if you type coastal scene it will only find assets with both "coastal" and "scene" in the name (ie, terms are ANDed together).
To the right of the search bar are three buttons. The first allows you to further filter the assets found by the search according to their type.

Continuing to the right, the next button filters assets according to their Label (labels can be set for an asset in the Inspector). Since the number of labels can potentially be very large, the label menu has its own mini-search filter box.

Note that the filters work by adding an extra term in the search text. A term beginning with "t:" filters by the specified asset type, while "l:" filters by label. You can type these terms directly into the search box rather than use the menu if you know what you are looking for. You can search for more than one type or label at once. Adding several types will expand the search to include all specified types (ie, types will be ORed together). Adding multiple labels will narrow the search to items that have all the specified labels (ie, labels are ANDed).
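To make the AND/OR behavior concrete, here is a small self-contained sketch (plain JavaScript, not the editor's actual implementation) of how such a query could be evaluated against an asset: plain terms are ANDed against the name, "t:" type terms are ORed together, and "l:" label terms are ANDed.

```javascript
// Hypothetical sketch of the search semantics described above.
function matchesQuery(asset, query) {
  var names = [], types = [], labels = [];
  query.split(/\s+/).forEach(function (term) {
    if (term.indexOf("t:") === 0) types.push(term.slice(2).toLowerCase());
    else if (term.indexOf("l:") === 0) labels.push(term.slice(2).toLowerCase());
    else if (term) names.push(term.toLowerCase());
  });
  var name = asset.name.toLowerCase();
  // Every plain term must appear in the name (terms are ANDed).
  var nameOk = names.every(function (t) { return name.indexOf(t) !== -1; });
  // Matching any one listed type is enough (types are ORed).
  var typeOk = types.length === 0 ||
    types.indexOf(asset.type.toLowerCase()) !== -1;
  // Every listed label must be present (labels are ANDed).
  var labelOk = labels.every(function (l) {
    return asset.labels.indexOf(l) !== -1;
  });
  return nameOk && typeOk && labelOk;
}
```

So `coastal t:texture t:scene` matches a scene or texture whose name contains "coastal", while `l:terrain l:rock` only matches assets carrying both labels.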

The rightmost button saves the search by adding an item to the Favorites section of the asset list.
Searching the Asset Store
The Project Browser's search can also be applied to assets available from the Unity Asset Store. If you choose Asset Store from the menu in the breadcrumb bar, all free and paid items from the store that match your query will be displayed. Searching by type and label works the same as for a Unity project. The search query words will be checked against the asset name first and then the package name, package label and package description in that order (so an item whose name contains the search terms will be ranked higher than one with the same terms in its package description).

If you select an item from the list, its details will be displayed in the inspector along with the option to purchase and/or download it. Some asset types have previews available in this section so you can, for example, play an audio clip or rotate a 3D model before buying. The inspector also gives the option of viewing the asset in the usual Asset Store window to see additional details.
Shortcuts
The following keyboard shortcuts are available when the browser view has focus. Note that some of them only work when the view is using the two-column layout (you can switch between the one- and two-column layouts using the panel menu in the very top right corner).
| F | Frame selection |
| Tab | Shift focus between first column and second column (Two columns) |
| Ctrl/Cmd + F | Focus search field |
| Ctrl/Cmd + A | Select all visible items in list |
| Ctrl/Cmd + D | Duplicate selected assets |
| Delete | Delete with dialog |
| Delete + Shift | Delete without dialog |
| Backspace + Cmd | Delete without dialogs (OSX) |
| Enter | Begin rename selected (OSX) |
| Cmd + down arrow | Open selected assets (OSX) |
| Cmd + up arrow | Jump to parent folder (OSX, Two columns) |
| F2 | Begin rename selected (Win) |
| Enter | Open selected assets (Win) |
| Backspace | Jump to parent folder (Win, Two columns) |
| Right arrow | Expand selected item (tree views and search results). If the item is already expanded, this will select its first child item. |
| Left arrow | Collapse selected item (tree views and search results). If the item is already collapsed, this will select its parent item. |
| Alt + right arrow | Expand item when showing assets as previews |
| Alt + left arrow | Collapse item when showing assets as previews |
Hierarchy

The Hierarchy contains every GameObject in the current Scene. Some of these are direct instances of asset files like 3D models, and others are instances of Prefabs, custom objects that will make up much of your game. You can select objects in the Hierarchy and drag one object onto another to make use of Parenting (see below). As objects are added and removed in the scene, they will appear and disappear from the Hierarchy as well.
Parenting
Unity uses a concept called Parenting. To make any GameObject the child of another, drag the desired child onto the desired parent in the Hierarchy. A child will inherit the movement and rotation of its parent. You can use a parent object's foldout arrow to show or hide its children as necessary.

Two unparented objects

One object parented to another
To learn more about Parenting, please review the Parenting section of the Transform Component page.
Toolbar

The Toolbar consists of five basic controls. Each relates to a different part of the Editor.
Transform Tools -- used with the Scene View
Transform Gizmo Toggles -- affect the Scene View display
Play/Pause/Step Buttons -- used with the Game View
Layers Drop-down -- controls which objects are displayed in Scene View
Layout Drop-down -- controls arrangement of all Views
Scene View

The Scene View
The Scene View is your interactive sandbox. You will use the Scene View to select and position environments, the player, the camera, enemies, and all other GameObjects. Maneuvering and manipulating objects within the Scene View are some of the most important functions in Unity, so it's important to be able to do them quickly. To this end, Unity provides keystrokes for the most common operations.
Scene View Navigation
See Scene View Navigation for full details on navigating the scene view. Here's a brief overview of the essentials:
- Hold the right mouse button to enter Flythrough mode. This turns your mouse and the WASD keys (plus Q and E for up and down) into quick first-person view navigation.
- Select any GameObject and press the F key. This will center the Scene View and pivot point on the selection.
- Use the arrow keys to move around on the X/Z plane.
- Hold Alt and click-drag to orbit the camera around the current pivot point.
- Hold Alt and middle click-drag to drag the Scene View camera around.
- Hold Alt and right click-drag to zoom the Scene View. This is the same as scrolling with your mouse wheel.
You might also find use in the Hand Tool (shortcut: Q), especially if you are using a one-button mouse. While the Hand Tool is selected:
- Click-drag to drag the camera around.
- Hold Alt and click-drag to orbit the camera around the current pivot point.
- Hold Control (Command on Mac) and click-drag to zoom the camera.
In the upper-right corner of the Scene View is the Scene Gizmo. This displays the Scene Camera's current orientation, and allows you to quickly modify the viewing angle.

Each of the coloured "arms" of the gizmo represents a geometric axis. You can click on any of the arms to set the camera to an orthographic (i.e., perspective-free) view looking along the corresponding axis. You can click on the text underneath the gizmo to switch between the normal perspective view and an isometric view. While in isometric mode, you can right-click drag to orbit, and Alt-click drag to pan.
Positioning GameObjects
See Positioning GameObjects for full details on positioning GameObjects in the scene. Here's a brief overview of the essentials:
When building your games, you'll place lots of different objects in your game world. To do this use the Transform Tools in the Toolbar to Translate, Rotate, and Scale individual GameObjects. Each has a corresponding Gizmo that appears around the selected GameObject in the Scene View. You can use the mouse and manipulate any Gizmo axis to alter the Transform Component of the GameObject, or you can type values directly into the number fields of the Transform Component in the Inspector.

Scene View Control Bar

The Scene View control bar lets you see the scene in various view modes - Textured, Wireframe, RGB, Overdraw, and many others. It will also enable you to see (and hear) in-game lighting, game elements, and sound in the Scene View. See View Modes for all the details.
Game View

The Game View is rendered from the Camera(s) in your game. It is representative of your final, published game. You will need to use one or more Cameras to control what the player actually sees while playing your game. For more information about Cameras, please view the Camera Component page.
Play Mode

Press the Play Mode button in the editor toolbar to run your game. Any changes you make while in Play Mode are temporary, and will be reset when you exit Play Mode. The editor UI darkens to remind you that your changes are temporary.
Game View Control Bar

The first drop-down on the Game View control bar is the Aspect drop-down. Here you can force the aspect ratio of the Game View window to different values. It can be used to test how your game will look on monitors with different aspect ratios.
Further to the right is the Maximize on Play toggle. While enabled, the Game View will maximize itself to 100% of your editor window for a nice full-screen preview when you enter Play Mode.
Continuing to the right is the Stats button. This shows the Rendering Statistics window, which is very useful for monitoring the graphics performance of your game (see Optimizing Graphics Performance for further details).

The last button is the Gizmos toggle. While enabled, all Gizmos that appear in the Scene View will also be drawn in the Game View. This includes any Gizmos drawn using the Gizmos class functions. Clicking the Gizmos button brings up a popup menu where you can show or hide the gizmos for the various different types of Components used in the game.

Next to each Component's name in this menu are the settings for its icon and its associated gizmos. The Icon setting reveals another popup menu which allows you to choose either a preset icon or a custom icon defined by a texture.

The Gizmo setting lets you selectively disable gizmo drawing for specific Components.
The 3D Gizmos setting at the top of the menu refers to the gizmo icons. When enabled, the icons are shown in perspective relative to the Camera (i.e., icons for nearby objects are larger than those for distant ones); when disabled, the icons are drawn at the same size regardless of distance. The slider next to the checkbox lets you vary the size of the icons, which can be useful for reducing clutter when many gizmos are visible.
Inspector

Games in Unity are made up of multiple GameObjects that contain meshes, scripts, sounds, or other graphical elements like Lights. The Inspector displays detailed information about your currently selected GameObject, including all attached Components and their properties. Here, you modify the functionality of GameObjects in your scene. You can read more about the GameObject-Component relationship, as it is very important to understand.
Any property that is displayed in the Inspector can be directly modified. Even script variables can be changed without modifying the script itself. You can use the Inspector to change variables at runtime to experiment and find the magic gameplay for your game. In a script, if you define a public variable of an object type (like GameObject or Transform), you can drag and drop a GameObject or Prefab into the Inspector to make the assignment.
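For example, declaring a public object-type variable exposes an assignable slot in the Inspector (a minimal UnityScript sketch; the variable name is illustrative):

```javascript
// Sketch: "target" shows up in the Inspector as a slot that accepts a
// dragged GameObject or Prefab (its Transform is assigned automatically).
var target : Transform;

function Update () {
    if (target != null)
        Debug.DrawLine (transform.position, target.position); // visualize the link
}
```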

Click the question mark beside any Component name in the Inspector to load its Component Reference page. Please view the Component Reference for a complete and detailed guide to all of Unity's Components.

Add Components from the menu
You can click the tiny gear icon (or right-click the Component name) to bring up a context menu for the specific Component.

The Inspector will also show any Import Settings for a selected asset file.

Click Apply to reimport your asset.

Use the Layer drop-down to assign a rendering Layer to the GameObject. Use the Tag drop-down to assign a Tag to this GameObject.
Prefabs
If you have a Prefab selected, some additional buttons will be available in the Inspector. For more information about Prefabs, please view the Prefab manual page.
Labels
Unity allows assets to be marked with Labels to make them easier to locate and categorise. The bottom item on the inspector is the Asset Labels panel.
At the bottom right of this panel is a button titled with an ellipsis ("...") character. Clicking this button will bring up a menu of available labels
You can select one or more items from the labels menu to mark the asset with those labels (they will also appear in the Labels panel). If you click a second time on one of the active labels, it will be removed from the asset.
The menu also has a text box that you can use to specify a search filter for the labels in the menu. If you type a label name that does not yet exist and press return/enter, the new label will be added to the list and applied to the selected asset. If you remove a custom label from all assets in the project, it will disappear from the list.
Once you have applied labels to your assets, you can use them to refine searches in the Project Browser (see this page for further details). You can also access an asset's labels from an editor script using the AssetDatabase class.
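A minimal editor-only UnityScript sketch of reading and writing labels with the AssetDatabase class (the menu path and the "Reviewed" label are illustrative assumptions, and the file must live in an "Editor" folder):

```javascript
// Editor-only sketch: lists the labels of the selected asset, then appends one.
import UnityEditor;

class LabelLister {
    @MenuItem ("Assets/List And Tag Labels")
    static function ListAndTagLabels () {
        var asset : Object = Selection.activeObject;
        if (asset == null) return;

        // Read the labels currently applied to the selected asset.
        var labels : String[] = AssetDatabase.GetLabels (asset);
        for (var label in labels)
            Debug.Log ("Label: " + label);

        // Append one more label and write the whole set back.
        var newLabels : String[] = new String[labels.Length + 1];
        System.Array.Copy (labels, newLabels, labels.Length);
        newLabels[labels.Length] = "Reviewed";
        AssetDatabase.SetLabels (asset, newLabels);
    }
}
```

Because GetLabels and SetLabels operate on asset objects, this only runs inside the editor, not in a built player.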
Other Views
The Views described on this page cover the basics of the interface in Unity. The other Views in Unity are described elsewhere on separate pages:
- The Console shows logs of messages, warnings, and errors.
- The Animation View can be used to animate objects in the scene.
- The Profiler can be used to investigate and find the performance bottle-necks in your game.
- The Asset Server View can be used to manage version control of the project using Unity's Asset Server.
- The Lightmapping View can be used to manage lightmaps using Unity's built-in lightmapping.
- The Occlusion Culling View can be used to manage Occlusion Culling for improved performance.
Customizing Your Workspace
You can customize your layout of Views by click-dragging the tab of any View to one of several locations. Dropping a tab in the Tab Area of an existing window will add the tab beside any existing tabs. Alternatively, dropping a tab in any Dock Zone will add the View in a new window.

Views can be docked to the sides or bottom of any existing window
Tabs can also be detached from the main editor window and arranged into their own floating editor windows. Floating windows can contain arrangements of Views and tabs just like the main editor window.

Floating editor windows are the same as the main editor window, except there is no Toolbar
When you've created an editor window layout, you can save the layout and restore it at any time. To do this, expand the Layout drop-down (found on the Toolbar) and choose Save Layout…. Name your new layout and save it, then restore it simply by choosing it from the Layout drop-down.

A completely custom layout
At any time, you can right-click the tab of any View to see additional options, such as Maximize, or to add a new tab to the same window.

Asset Workflow
Here we'll explain the steps to use a single asset with Unity. These steps are general and are meant only as an overview for basic actions. For the example, we'll talk about using a 3D mesh.
Create Rough Asset
Use any supported 3D modeling package to create a rough version of your asset. Our example will use Maya. Work with the asset until you are ready to save. For a list of applications that are supported by Unity, please see this page.
Import
When you save your asset initially, you should save it normally to the Assets folder in your Project folder. When you open the Unity project, the asset will be detected and imported into the project. When you look in the Project View, you'll see the asset located there, right where you saved it. Please note that Unity uses the FBX exporter provided by your modeling package to convert your models to the FBX file format. You will need to have the FBX exporter of your modeling package available for Unity to use. Alternatively, you can directly export your model as an FBX file from your application and save it in the Assets folder. For a list of applications that are supported by Unity, please see this page.
Import Settings
If you select the asset in the Project View the import settings for this asset will appear in the Inspector. The options that are displayed will change based on the type of asset that is selected.
Adding Asset to the Scene
Simply click and drag the mesh from the Project View to the Hierarchy or Scene View to add it to the Scene. When you drag a mesh to the scene, you are creating a GameObject that has a Mesh Renderer Component. If you are working with a texture or a sound file, you will have to add it to a GameObject that already exists in the Scene or Project.
Putting Different Assets Together
Here is a brief description of the relationships between the most common assets:
- A Texture is applied to a Material
- A Material is applied to a GameObject (with a Mesh Renderer Component)
- An Animation is applied to a GameObject (with an Animation Component)
- A sound file is applied to a GameObject (with an Audio Source Component)
Creating a Prefab
Prefabs are a collection of GameObjects & Components that can be re-used in your scenes. Several identical objects can be created from a single Prefab, a process called instancing. Take trees for example. Creating a tree Prefab will allow you to instance several identical trees and place them in your scene. Because the trees are all linked to the Prefab, any changes that are made to the Prefab will automatically be applied to all tree instances. So if you want to change the mesh, material, or anything else, you just make the change once in the Prefab and all the other trees inherit the change. You can also make changes to an instance, and choose GameObject->Apply Changes to Prefab from the main menu. This can save you lots of time during setup and updating of assets.
When you have a GameObject that contains multiple Components and a hierarchy of child GameObjects, you can make a Prefab of the top-level GameObject (or root), and re-use the entire collection of GameObjects.
Think of a Prefab as a blueprint for a structure of GameObjects. All the Prefab clones are identical to the blueprint. Therefore, if the blueprint is updated, so are all the clones. There are different ways you can update the Prefab itself by changing one of its clones and applying those changes to the blueprint. To read more about using and updating Prefabs, please view the Prefabs page.
To actually create a Prefab from a GameObject in your scene, simply drag the GameObject from the scene into the project, and you should see the Game Object's name text turn blue. Name the new Prefab whatever you like. You have now created a re-usable prefab.
Updating Assets
You have imported, instantiated, and linked your asset to a Prefab. Now when you want to edit your source asset, just double-click it from the Project View. The appropriate application will launch, and you can make any changes you want. When you're done updating it, just Save it. Then, when you switch back to Unity, the update will be detected, and the asset will be re-imported. The asset's link to the Prefab will also be maintained. So the effect you will see is that your Prefab will update. That's all you have to know to update assets. Just open it and save!
Optional - Adding Labels to the Assets
It is always a good idea to add labels to your assets if you want to keep them organized; you can then search for the labels associated with each asset in the search field of the Project View or the Object Selector.
Steps for adding a label to an asset:
- Select the asset you want to add the label to (from the Project View).
- In the Inspector, click the "Add Label" icon if the asset does not yet have any labels.
- If the asset already has one or more labels, just click where the labels are shown.
- Start typing your labels.
Notes:
- You can have more than one label for any asset.
- To separate/create labels, just press Space or Enter while typing label names.
Creating Scenes
Scenes contain the objects of your game. They can be used to create a main menu, individual levels, and anything else. Think of each Scene file as a unique level. In each Scene, you will place your environments, obstacles, and decorations, essentially designing and building your game in pieces.
Instancing Prefabs
Use the method described in the last section to create a Prefab. For more details about Prefabs, see here. Once you've created a Prefab, you can quickly and easily make copies of it, called Instances. To create an instance of any Prefab, drag the Prefab from the Project View to the Hierarchy or Scene View. Now you have a unique instance of your Prefab to position and tweak as you like.
Adding Components & Scripts
When you have a Prefab or any GameObject highlighted, you can add additional functionality to it by using Components. Please view the Component Reference for details about all the different Components. Scripts are a type of Component. To add a Component, just highlight your GameObject and select a Component from the Component menu. You will then see the Component appear in the Inspector of the GameObject. Scripts are also contained in the Component menu by default.
If adding a Component breaks the GameObject's connection to its Prefab, you can always use GameObject->Apply Changes to Prefab from the menu to re-establish the link.
Placing GameObjects
Once your GameObject is in the scene, you can use the Transform Tools to position it however you like. Additionally, you can use the Transform values in the Inspector to fine-tune placement and rotation. Please view the Transform Component page for more information about positioning and rotating GameObjects.
Working with Cameras
Cameras are the eyes of your game. Everything the player sees while playing is seen through one or more Cameras. You can position, rotate, and parent Cameras just like any other GameObject. A Camera is just a GameObject with a Camera Component attached to it, so it can do anything a regular GameObject can do, plus some camera-specific functions. There are also some useful Camera scripts that are installed with the standard assets when you create a new project; you can find them in Components->Camera-Control in the menu. There are some additional aspects to Cameras which will be good to understand. To read about Cameras, view the Camera Component reference.
Lights
Except for some very few cases, you will always need to add Lights to your scene. There are three different types of lights, and all of them behave a little differently from each other. The important thing is that they add atmosphere and ambience to your game. Different lighting can completely change the mood of your game, and using lights effectively is an important subject to learn. To read about the different lights, please view the Light Component reference.
Publishing Builds
At any time while you are creating your game, you might want to see how it looks when you build and run it outside of the editor as a standalone or web player. This section will explain how to access the Build Settings and how to create different builds of your games.
File->Build Settings… is the menu item to access the Build Settings window. It pops up an editable list of the scenes that will be included when you build your game.

The Build Settings window
The first time you view this window in a project, it will appear blank. If you build your game while this list is blank, only the currently-open scene will be included in the build. If you want to quickly build a test player with only one scene file, just build a player with a blank scene list.
It is easy to add scene files to the list for multi-scene builds. There are two ways to add them. The first way is to click the Add Current button. You will see the currently-open scene appear in the list. The second way to add scene files is to drag them from the Project View to the list.
At this point, notice that each of your scenes has a different index value. Scene 0 is the first scene that will be loaded when you build the game. When you want to load a new scene, use Application.LoadLevel() inside your scripts.
If you add more than one scene file and want to rearrange them, simply click and drag the scenes on the list above or below others until you have them in the desired order.
If you want to remove a scene from the list, click to highlight the scene and press Delete. The scene will disappear from the list and will not be included in the build.
When you are ready to publish your build, select a Platform and make sure that the Unity logo is next to the platform; if it isn't, click the Switch Platform button to let Unity know which platform you want to build for. Finally, press the Build button. You will be able to select a name and location for the game using a standard Save dialog. When you click Save, Unity builds your game. It's that simple. If you are unsure where to save your built game to, consider saving it into the project's root folder. You cannot save the build into the Assets folder.
Enabling the Development Build checkbox on a player will enable Profiler functionality. Additionally, the player will be built with debug symbols, so you can use third-party profiling or debugging tools. It also makes the Autoconnect Profiler and Script Debugging options available.

Desktop
Web Player Streaming
Streaming Web Players allow your web player games to begin playing as soon as Scene 0 is finished loading. If you have a game with 10 levels, it doesn't make much sense to force the player to wait and download all the assets for levels 2-10 before they can start playing level 1. When you publish a Streaming Web Player, the assets that must be downloaded will be ordered by the Scene file they appear in. As soon as all assets in Scene 0 are finished downloading, the web player will begin playing.
Put simply, Streaming Web Players will get players playing your game faster than ever.
The only thing you need to worry about is checking to make sure that the next level you want to load is finished streaming before you load it.
Normally, in a non-streamed player, you use the following code to load a level:
Application.LoadLevel("levelName");
In a Streaming Web Player, you must first check that the level is finished streaming. This is done through the CanStreamedLevelBeLoaded() function. This is how it works:
var levelToLoad = 1;

function LoadNewLevel () {
    if (Application.CanStreamedLevelBeLoaded (levelToLoad)) {
        Application.LoadLevel (levelToLoad);
    }
}
If you would like to display the level streaming progress to the player, for a loading bar or other representation, you can read the progress by accessing GetStreamProgressForLevel().
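A minimal loading-display sketch in UnityScript (the GUI layout values and button label below are illustrative, not from the manual):

```javascript
// Sketch: show streaming progress until the level can be loaded.
var levelToLoad = 1;

function OnGUI () {
    // Progress runs from 0.0 to 1.0 while the level streams in.
    var progress = Application.GetStreamProgressForLevel (levelToLoad);

    if (Application.CanStreamedLevelBeLoaded (levelToLoad)) {
        if (GUI.Button (Rect (10, 10, 100, 30), "Start"))
            Application.LoadLevel (levelToLoad);
    } else {
        GUI.Box (Rect (10, 10, 200, 30),
                 "Loading: " + Mathf.Round (progress * 100) + "%");
    }
}
```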
Offline webplayer deployment
If the Offline Deployment option is enabled for a webplayer build, the UnityObject.js file (used to interface the player with the host page) will be placed alongside the player during the build. This allows the page to work with the local copy of the script even when there is no network connection; normally, UnityObject.js is downloaded from Unity's webserver so that the latest version is always used.
Building standalone players
With Unity you can build standalone applications for Windows and Mac (Intel, PowerPC or Universal, which runs on both architectures). It's simply a matter of choosing the build target in the Build Settings dialog and hitting the Build button. When building standalone players, the resulting files will vary depending on the build target. On Windows, an executable file (.exe) will be built, along with a Data folder which contains all the resources for your application. On Mac, an app bundle will be built, containing both the files needed to run the application and its resources.
Distributing your standalone on Mac is just a matter of providing the app bundle (everything is packed in there). On Windows, you need to provide both the .exe file and the Data folder for others to run it. Think of it this way: other people must have the same files on their computer as the ones Unity builds for you, in order to run your game.
Inside the build process
The building process will place a blank copy of the built game application wherever you specify. Then it will work through the scene list in the build settings, open the scenes in the editor one at a time, optimize them, and integrate them into the application package. It will also calculate all the assets that are required by the included scenes and store them within the application package.
- Any GameObject in a scene that is tagged with EditorOnly will not be included in the published build. This is useful for debugging scripts that don't need to be included in the final game.
- When a new level loads, all the objects in the previous level are destroyed. To prevent this, use DontDestroyOnLoad() on any objects you don't want destroyed. This is most commonly used for keeping music playing while loading a level, or for game controller scripts which keep game state and progress.
- After the loading of a new level is finished, the message OnLevelWasLoaded() will be sent to all active game objects.
- For more information on how to best create a game with multiple scenes (for instance a main menu, a high-score screen, and actual game levels), see the Scripting Tutorial.pdf.
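The music-player case mentioned above can be sketched in a few lines of UnityScript (an illustrative example; attach it to the GameObject carrying the AudioSource):

```javascript
// Sketch: keeps this GameObject (and its playing music) alive across
// level loads, and reacts once a new level has finished loading.
function Awake () {
    DontDestroyOnLoad (gameObject);
}

function OnLevelWasLoaded (level : int) {
    Debug.Log ("Level " + level + " finished loading; music keeps playing.");
}
```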

iOS
Inside the iOS build process
The iPhone/iPad application build process is a two-step process:
- An XCode project is generated with all the required libraries, precompiled .NET code and serialized assets.
- The XCode project is built and deployed on the actual device.
When you press Build in the Build Settings dialog, only the first step is accomplished. Pressing Build & Run in the editor performs both steps. If, in the project save dialog, you select a folder that already exists, a warning will be displayed. There are currently two XCode project generation modes to choose between:
- replace - all the files from the target folder are removed and the new content is generated.
- append - the Data, Libraries and project root folders are cleaned and filled with newly generated content. The XCode project file is updated according to the latest Unity project changes. The Classes subfolder of the XCode project can be considered a safe place to keep custom native code, but making regular backups is recommended. Append mode is supported only for existing XCode projects generated by the same Unity iOS version.
If Cmd+B is pressed, the automatic build and run process is invoked and the most recently used folder is assumed as the build target. In this case, append mode is assumed by default.

Android
The Android application build process is a two-step process:
- An application package (.apk file) is generated with all the required libraries, precompiled .NET code and serialized assets.
- The application package is deployed on the actual device.
When you press Build in the Build Settings dialog, only the first step is accomplished. Pressing Build & Run performs both steps. If Cmd+B is pressed, the automatic build and run process is invoked and the most recently used folder is assumed as the build target.
The first time you build a project for Android, Unity will ask you to locate the Android SDK, which is required to build and install your Android application on the device. You can change this setting later in the Preferences (menu: Edit->Preferences).

When building your app to an Android device, make sure that the USB Debugging and Allow mock locations checkboxes are enabled in the device settings.

You can verify that the OS sees your device by running the adb devices command found in the Android SDK/platform-tools folder.
This works on both Windows and Mac.

Unity builds an application archive (.apk file) for you and installs it on the connected device. In some cases your application cannot autostart, like on iPhone, so you need to unlock the screen, and in some rare cases find the newly installed application in the menu.
Texture Compression
Under Build Settings you will also find the Texture Compression option. By default, Unity uses the ETC1/RGBA16 texture format for textures that don't have individual texture format overrides (see Texture 2D / Per-Platform Overrides).
If you want to build an application archive (.apk file) targeting a specific hardware architecture, you can use the Texture Compression option to override this default behavior. Any texture that is set to not be compressed will be left alone; only textures using a compressed texture format will use the format selected in the Texture Compression option.
To make sure the application is only deployed on devices which support the selected texture compression, Unity will edit the AndroidManifest to include tags matching the particular format selected. This enables the Android Market filtering mechanism to serve the application only to devices with the appropriate graphics hardware.
Preloading
Published builds automatically preload all assets in a scene when the scene loads. The exception to this rule is scene 0. This is because the first scene is usually a splash screen, which you want to display as quickly as possible.
To make sure all your content is preloaded, you can create an empty scene which calls Application.LoadLevel(1). In the build settings, make this empty scene's index 0. All subsequent levels will then be preloaded.
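A loader script for such an empty scene can be as short as this (a minimal sketch; attach it to any object in the scene with index 0):

```javascript
// Sketch: empty scene 0 immediately loads level 1, so every
// subsequent level benefits from automatic preloading.
function Start () {
    Application.LoadLevel (1);
}
```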
You're ready to build games
By now, you have learned how to use Unity's interface, how to use assets, how to create scenes, and how to publish your builds. There is nothing stopping you from creating the game of your dreams. You'll certainly learn much more along the way, and we're here to help.
To learn more details about using Unity itself, you can continue reading the manual or follow the Tutorials.
To learn more about Components, the nuts & bolts of game behaviors, please view the Component Reference.
To learn more about scripting, please view the Scripting Reference.
To learn more about creating art assets, please view the Assets section of the manual.
To interact with the community of Unity users and developers, visit the Unity Forums. You can ask questions, share projects, build a team, anything you want to do. Definitely visit the forums at least once, because we want to see the amazing games that you make.
Tutorials
These tutorials will let you work with Unity while you follow along, giving you hands-on experience with building real projects. For new users, it is recommended that you follow the GUI Essentials and Scripting Essentials tutorials first. After that, you can follow any of them in any order. They are all in PDF format, so you can print them out and follow along, or read them alongside Unity.
Note: These tutorials are written for the desktop build targets of Unity, so they will not work on Android or iOS devices (iPhone/iPad).
Also, if you are looking for other resources such as presentations, articles, assets or extensions for Unity, you can find them here.
You can also check the latest additions to the tutorials by visiting the Unity3D Tutorial's Home Page.
Page last updated: 2012-11-09Unity Hotkeys
このページでは、Unityのデフォルトショートカットキーの概要を説明します。PDFで一覧を確認したい場合は、Windows and MacOSXらダウンロードできます。。キーの一部に”CTRL/CMD"とある場合は、WindowsではばControlキーをMacOSXならばCommandキーを使用します。
| Tool | |
| Key | Command |
| Q | Pan |
| W | Move |
| E | Rotate |
| R | Scale |
| Z | Pivot Mode toggle |
| X | Pivot Rotation toggle |
| V | Vertex Snap |
| CTRL/CMD+LMB | Snap |
| GameObject | |
| CTRL/CMD+SHIFT+N | New game object |
| CTRL/CMD+ALT+F | Move to view |
| CTRL/CMD+SHIFT+F | Align with view |
| Window | |
| CTRL/CMD+1 | Scene |
| CTRL/CMD+2 | Game |
| CTRL/CMD+3 | Inspector |
| CTRL/CMD+4 | Hierarchy |
| CTRL/CMD+5 | Project |
| CTRL/CMD+6 | Animation |
| CTRL/CMD+7 | Profiler |
| CTRL/CMD+9 | Asset store |
| CTRL/CMD+0 | Asset Server |
| CTRL/CMD+SHIFT+C | Console |
| Edit | |
| CTRL/CMD+Z | Undo |
| CTRL+Y (Windows only) | Redo |
| CMD+SHIFT+Z (Mac only) | Redo |
| CTRL/CMD+X | Cut |
| CTRL/CMD+C | Copy |
| CTRL/CMD+V | Paste |
| CTRL/CMD+D | Duplicate |
| SHIFT+Del | Delete |
| F | Frame (center) selection |
| CTRL/CMD+F | Find |
| CTRL/CMD+A | Select All |
| Selection | |
| CTRL/CMD+SHIFT+1 | Load Selection 1 |
| CTRL/CMD+SHIFT+2 | Load Selection 2 |
| CTRL/CMD+SHIFT+3 | Load Selection 3 |
| CTRL/CMD+SHIFT+4 | Load Selection 4 |
| CTRL/CMD+SHIFT+5 | Load Selection 5 |
| CTRL/CMD+SHIFT+6 | Load Selection 6 |
| CTRL/CMD+SHIFT+7 | Load Selection 7 |
| CTRL/CMD+SHIFT+8 | Load Selection 8 |
| CTRL/CMD+SHIFT+9 | Load Selection 9 |
| CTRL/CMD+ALT+1 | Save Selection 1 |
| CTRL/CMD+ALT+2 | Save Selection 2 |
| CTRL/CMD+ALT+3 | Save Selection 3 |
| CTRL/CMD+ALT+4 | Save Selection 4 |
| CTRL/CMD+ALT+5 | Save Selection 5 |
| CTRL/CMD+ALT+6 | Save Selection 6 |
| CTRL/CMD+ALT+7 | Save Selection 7 |
| CTRL/CMD+ALT+8 | Save Selection 8 |
| CTRL/CMD+ALT+9 | Save Selection 9 |
| Assets | |
| CTRL/CMD+R | Refresh |
Preferences
Unity provides a number of preference panels to allow you to customise the behaviour of the editor.
General

| Auto Refresh | Should the editor update assets automatically as they change? |
| Always Show Project Wizard | Should the project wizard be shown at startup? (By default, it is shown only when the alt key is held down during launch) |
| Compress Assets On Import | Should assets be compressed automatically during import? |
| OSX Color Picker | Should the native OSX color picker be used instead of Unity's own? |
| Editor Analytics | Can the editor send information back to Unity automatically? |
| Show Asset Store search hits | Should the number of free/paid assets from the store be shown in the Project Browser? |
| Verify Saving Assets | Should Unity verify which assets to save individually on quitting? |
| Skin (Pro Only) | Which color scheme should Unity use for the editor? Pro users have the option of dark grey in addition to the default light grey. |
| Graphics Device | This is set to Automatic on the Mac but has options for Direct3D 9, Direct3D 11 and OpenGL on Windows. |
External Tools

| External Script Editor | Which application should Unity use to open script files? |
| Editor Attaching | Should Unity allow debugging to be controlled from the external script editor? |
| Image Application | Which application should Unity use to open image files? |
| Asset Server Diff Tool | Which application should Unity use to resolve file differences with the asset server? |
| Android SDK Location | Where in the filesystem is the Android SDK folder located? |
| iOS Xcode 4.x support | Should support for Xcode 4.x be enabled for iOS build targets? |
Colors

This panel allows you to choose the colors that Unity uses when displaying various user interface elements.
Keys

This panel allows you to set the keystrokes that activate the various commands in Unity.
Cache Server

| Use Cache Server | Should the cache server be enabled? |
| IP Address | IP address of the cache server, if enabled |
Building Scenes
This section will explain the core elements you will work with to build scenes for complete games.
Page last updated: 2007-11-16
GameObjects
GameObjects are the most important objects in Unity. It is very important to understand what a GameObject is, and how it can be used. This page will explain all that for you.
What is a GameObject?
Every object in your game is essentially a GameObject. However, GameObjects don't do anything on their own. They need special properties before they can become a character, an environment, or a special effect. Each of these objects behaves differently. If every object is a GameObject, how do we differentiate an interactive power-up object from a static room? What makes these GameObjects different from each other?
The answer to this question is that GameObjects are containers. They are empty boxes which can hold the different pieces that make up a lightmapped island or a physics-driven car. So to really understand GameObjects, you have to understand these pieces, which are called Components. Depending on what kind of object you want to create, you will add different combinations of Components to a GameObject. Think of a GameObject as an empty cooking pot, and Components as the different ingredients that make up your recipe of gameplay. You can also make your own Components using Scripts.
The pages in this section describe GameObjects, Components, and Script Components.
Page last updated: 2012-11-13
The GameObject-Component Relationship
As described previously in GameObjects, a GameObject contains Components. We'll explore this relationship by discussing a GameObject and its most common Component -- the Transform Component. With any Unity Scene open, create a new GameObject (using Ctrl+Shift+N on Windows or Cmd+Shift+N on Mac), select it and take a look at the Inspector.

The Inspector of an Empty GameObject
Notice that an empty GameObject still contains a Name, a Tag, and a Layer. Every GameObject also contains a Transform Component.
The Transform Component
It is impossible to create a GameObject in Unity without a Transform Component. The Transform Component is one of the most important Components, since all of the GameObject's Transform properties are enabled by its use of this Component. It defines the GameObject's position, rotation, and scale in the game world/Scene View. If a GameObject did not have a Transform Component, it would be nothing more than some information in the computer's memory. It effectively would not exist in the world.
The Transform Component also enables a concept called Parenting, which is utilized through the Unity Editor and is a critical part of working with GameObjects. To learn more about the Transform Component and Parenting, read the Transform Component Reference page.
Other Components
The Transform Component is critical to all GameObjects, so each GameObject has one. But GameObjects can contain other Components as well.

The Main Camera, added to each scene by default
Looking at the Main Camera GameObject, you can see that it contains a different collection of Components. Specifically, a Camera Component, a GUILayer, a Flare Layer, and an Audio Listener. All of these Components provide additional functionality to the GameObject. Without them, there would be nothing rendering the graphics of the game for the person playing! Rigidbodies, Colliders, Particles, and Audio are all different Components (or combinations of Components) that can be added to any given GameObject.
Page last updated: 2012-08-13
Using Components
Components are the nuts & bolts of objects and behaviors in a game. They are the functional pieces of every GameObject. If you don't yet understand the relationship between Components and GameObjects, read the GameObjects page before going any further.
A GameObject is a container for many different Components. By default, all GameObjects automatically have a Transform Component. This is because the Transform dictates where the GameObject is located, and how it is rotated and scaled. Without a Transform Component, the GameObject wouldn't have a location in the world. Try creating an empty GameObject now as an example. Click the GameObject->Create Empty menu item. Select the new GameObject, and look at the Inspector.

Even empty GameObjects have a Transform Component
Remember that you can always use the Inspector to see which Components are attached to the selected GameObject. As Components are added and removed, the Inspector will always show you which ones are currently attached. You will use the Inspector to change all the properties of any Component (including scripts)
Adding Components
You can add Components to the selected GameObject through the Components menu. We'll try this now by adding a Rigidbody to the empty GameObject we just created. Select it and choose Component->Physics->Rigidbody from the menu. When you do, you will see the Rigidbody's properties appear in the Inspector. If you press Play while the empty GameObject is still selected, you might get a little surprise. Try it and notice how the Rigidbody has added functionality to the otherwise empty GameObject. (The y-component of the GameObject starts to decrease. This is because the physics engine in Unity is causing the GameObject to fall under gravity.)
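Components can also be added from a script with GameObject.AddComponent, which is useful for objects created at runtime. A minimal sketch (the class name is illustrative):

```csharp
// Adds a Rigidbody at runtime, equivalent to using the Component menu.
using UnityEngine;

public class AddRigidbodyAtRuntime : MonoBehaviour {
    void Start() {
        // The GameObject will now fall under gravity when you press Play.
        gameObject.AddComponent<Rigidbody>();
    }
}
```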

An empty GameObject with a Rigidbody Component attached
Another option is to use the Component Browser, which can be activated with the Add Component button in the object's inspector.

The browser lets you navigate the components conveniently by category and also has a search box that you can use to locate components by name.
You can attach any number or combination of Components to a single GameObject. Some Components work best in combination with others. For example, the Rigidbody works with any Collider. The Rigidbody controls the Transform through the NVIDIA PhysX physics engine, and the Collider allows the Rigidbody to collide and interact with other Colliders.
If you want to know more about using a particular Component, you can read about any of them in the Component Reference. You can also access the reference page for a Component from Unity by clicking on the small ? on the Component's header in the Inspector.
Editing Components
One of the great aspects of Components is flexibility. When you attach a Component to a GameObject, there are different values or Properties in the Component that can be adjusted in the editor while building a game, or by scripts when running the game. There are two main types of Properties: Values and References.
Look at the image below. It is an empty GameObject with an Audio Source Component. All the values of the Audio Source in the Inspector are the default values.

This Component contains a single Reference property, and seven Value properties. Audio Clip is the Reference property. When this Audio Source begins playing, it will attempt to play the audio file that is referenced in the Audio Clip property. If no reference is made, an error will occur because there is no audio to be played. You must reference the file within the Inspector. This is as easy as dragging an audio file from the Project View onto the Reference Property or using the Object Selector.

Now a sound effect file is referenced in the Audio Clip property
Components can include references to any other type of Component, GameObjects, or Assets. You can read more about assigning references on the Assigning References page.
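In a script, a Reference property is simply a public variable of the referenced type; whatever you drag into the Inspector slot is what the variable holds at runtime. A minimal sketch (class and variable names are illustrative, and an Audio Source Component is assumed to be attached):

```csharp
using UnityEngine;

public class PlayClipOnStart : MonoBehaviour {
    // Appears in the Inspector as a reference slot of type Audio Clip;
    // drag an audio file from the Project View onto it.
    public AudioClip clip;

    void Start() {
        // 'audio' is the attached Audio Source (Unity 4-era shorthand).
        audio.clip = clip;
        audio.Play();
    }
}
```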
The remaining properties on the Audio Source are all Value properties. These can be adjusted directly in the Inspector. The Value properties shown here are all toggles, numeric values, and drop-down fields, but value properties can also be text strings, colors, curves, and other types. You can read more about these and about editing value properties on the Editing Value Properties page.
Copying and pasting Component settings
The context menu for a Component has items for copying and pasting its settings.

The copied values can be pasted to an existing component using the Paste Component Values menu item. Alternatively, you can use Paste Component As New to create a new Component with those values.
Testing out Properties
While your game is in Play Mode, you are free to change properties in any GameObject's Inspector. For example, you might want to experiment with different heights of jumping. If you create a Jump Height property in a script, you can enter Play Mode, change the value, and press the jump button to see what happens. Then without exiting Play Mode you can change it again and see the results within seconds. When you exit Play Mode, your properties will revert to their pre-Play Mode values, so you don't lose any work. This workflow gives you incredible power to experiment, adjust, and refine your gameplay without investing a lot of time in iteration cycles. Try it out with any property in Play Mode. We think you'll be impressed.
Changing the order of Components
The order in which components are listed in the Inspector doesn't matter in most cases. However, there are some Components, such as Image Effects, where the ordering is significant. The context menu has Move Up and Move Down commands to let you reorder Components as necessary.

Removing Components
If you want to remove a Component, option- or right-click on its header in the Inspector, and choose Remove Component. Or you can left-click the options icon next to the ? on the Component header. All the property values will be lost and this cannot be undone, so be completely sure you want to remove the Component before you do.
Page last updated: 2012-09-12
The Component-Script Relationship
When you create a script and attach it to a GameObject, the script appears in the GameObject's Inspector just like a Component. This is because scripts become Components when they are saved - a script is just a specific type of Component. In technical terms, a script compiles as a type of Component, and is treated like any other Component by the Unity engine. So essentially, a script is a Component that you create yourself. You define its members to be exposed in the Inspector, and it executes whatever functionality you've written.
Read more about creating and using scripts on the Scripting page.
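For example, a script like the following becomes a Component once attached to a GameObject; its public member shows up in the Inspector as an editable property (the class name and member are illustrative):

```csharp
using UnityEngine;

public class Spin : MonoBehaviour {
    // Exposed in the Inspector and editable there, even in Play Mode.
    public float degreesPerSecond = 90.0f;

    void Update() {
        // Use the exposed value each frame.
        transform.Rotate(0, degreesPerSecond * Time.deltaTime, 0);
    }
}
```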
Page last updated: 2012-11-09
Deactivating GameObjects
A GameObject can be temporarily removed from the scene by marking it as inactive. This can be done by calling its SetActive function from a script or with the activation checkbox in the inspector

A GameObject's activation checkbox
Effect of deactivating a parent GameObject
When a parent object is deactivated, the deactivation also overrides the activeSelf setting on all its child objects, so the whole hierarchy from the parent down is made inactive. Note that this does not change the value of the activeSelf property on the child objects, so they will return to their original state once the parent is reactivated. This means that you can't determine whether or not a child object is currently active in the scene by reading its activeSelf property. Instead, you should use the activeInHierarchy property, which takes the overriding effect of the parent into account.
This overriding behaviour was introduced in Unity 4.0. In earlier versions, there was a function called SetActiveRecursively which could be used to activate or deactivate the children of a given parent object. However, this function worked differently in that the activation setting of each child object was changed - the whole hierarchy could be switched off and on but the child objects had no way to "remember" the state they were originally in. To avoid breaking legacy code, SetActiveRecursively has been kept in the API for 4.0 but its use is not recommended and it may be removed in the future. In the unusual case where you actually want the children's activeSelf settings to be changed, you can use code like the following:-
// JavaScript
function DeactivateChildren(g: GameObject, a: boolean) {
    // activeSelf is read-only, so use SetActive to change the state
    g.SetActive(a);
    for (var child: Transform in g.transform) {
        DeactivateChildren(child.gameObject, a);
    }
}
// C#
void DeactivateChildren(GameObject g, bool a) {
    // activeSelf is read-only, so use SetActive to change the state
    g.SetActive(a);
    foreach (Transform child in g.transform) {
        DeactivateChildren(child.gameObject, a);
    }
}
Page last updated: 2012-10-05
Using The Inspector
The Inspector is used to view and edit Properties of many different types.
Games in Unity are made up of multiple GameObjects that contain meshes, scripts, sounds, or other graphical elements like Lights. When you select a GameObject in the Hierarchy or Scene View, the Inspector will show and let you modify the Properties of that GameObject and all the Components and Materials on it. The same will happen if you select a Prefab in the Project View. This way you modify the functionality of GameObjects in your game. You can read more about the GameObject-Component relationship, as it is very important to understand.

Inspector shows the properties of a GameObject and the Components and Materials on it.
When you create a script yourself, which works as a custom Component type, the member variables of that script are also exposed as Properties that can be edited directly in the Inspector when that script component has been added to a GameObject. This way script variables can be changed without modifying the script itself.
Furthermore, the Inspector is used for showing import options of assets such as textures, 3D models, and fonts when selected. Some scene and project-wide settings are also viewed in the Inspector, such as all the Settings Managers.
Any property that is displayed in the Inspector can be directly modified. There are two main types of Properties: Values and References.
Page last updated: 2010-09-14
Editing Value Properties
Value properties do not reference anything and they can be edited right on the spot. Typical value properties are numbers, toggles, strings, and selection popups, but they can also be colors, vectors, curves, and other types.

Value properties on the inspector can be numbers, checkboxes, strings...
Many value properties have a text field and can be adjusted simply by clicking on them, entering a value using the keyboard, and pressing Enter to save the value.
- You can also put your mouse next to a numeric property, left-click and drag it to scroll values quickly
- Some numeric properties also have a slider that can be used to visually tweak the value.
Some Value Properties open up a small popup dialog that can be used to edit the value.
Color Picker
Properties of the Color type will open up the Color Picker. (On Mac OS X this color picker can be changed to the native OS color picker by enabling OSX Color Picker under Unity->Preferences.)
The Color Picker reference in the inspector is represented by:

Color Picker reference in the inspector.
And opens the Color Picker just by clicking on it:

Color Picker descriptions.
Use the Eyedropper Tool when you want to find a value just by putting your mouse over the color you want to grab.
RGB / HSV Selector lets you switch your values from Red, Green, Blue to Hue, Saturation and Value (Strength) of your color.
Finally, the transparency of the Color selected can be controlled by the Alpha Channel value.
Curve Editor
Properties of the AnimationCurve type will open up the Curve Editor. The Curve Editor lets you edit a curve or choose from one of the presets. For more information on editing curves, see the guide on Editing Curves.
The type is called AnimationCurve for legacy reasons, but it can be used to define any custom curve function. The function can then be evaluated at runtime from a script.
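As a sketch of runtime evaluation (class and variable names are illustrative), a script can sample the curve with AnimationCurve.Evaluate:

```csharp
using UnityEngine;

public class CurveSample : MonoBehaviour {
    // Edit this curve in the Inspector using the Curve Editor.
    public AnimationCurve speedCurve = AnimationCurve.Linear(0, 0, 1, 1);

    void Update() {
        // Evaluate the custom function at the current time and use the result.
        float speed = speedCurve.Evaluate(Time.time);
        transform.Translate(0, 0, speed * Time.deltaTime);
    }
}
```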
An AnimationCurve property is shown in the inspector as a small preview:

A preview of an AnimationCurve in the Inspector.
Clicking on it opens the Curve Editor:

The Curve Editor is for editing AnimationCurves.
Wrapping Mode lets you select between Ping Pong, Clamp and Loop for the Control Keys in your curve.
The Presets let you quickly set the curve to one of several commonly-used default shapes.
Gradient editor
In graphics and animation, it is often useful to be able to blend one colour gradually into another, over space or time. A gradient is a visual representation of a colour progression, which simply shows the main colours (which are called stops) and all the intermediate shades between them. In Unity, gradients have their own special value editor, shown below.

The upward-pointing arrows along the bottom of the gradient bar denote the stops. You can select a stop by clicking on it; its value will be shown in the Color box which will open the standard colour picker when clicked. A new stop can be created by clicking just underneath the gradient bar. The position of any of the stops can be changed simply by clicking and dragging and a stop can be removed with .
The downward-pointing arrows above the gradient bar are also stops but they correspond to the alpha (transparency) of the gradient at that point. By default, there are two stops set to 100% alpha (ie, fully opaque) but any number of stops can be added and edited in much the same way as the colour stops.
Page last updated: 2012-08-13
Editing Reference Properties
Reference properties are properties that reference other objects such as GameObjects, Components, or Assets. The reference slot shows what kind of object can be used for the reference.

The Audio Clip property slot shows that it accepts a reference to an Audio Clip object.

An Audio Clip file is now referenced in the Audio Clip property.
This type of referencing is very quick and powerful, especially when using scripting. To learn more about using scripts and properties, see the Scripting Tutorial on the Tutorials page.
Object references can be assigned to a reference property either by drag and drop or by using the Object Picker.
Drag and Drop
You can use drag and drop simply by selecting the desired object in the Scene View, Hierarchy, or Project View and dragging it into the slot of the reference property.
If a reference property accepts a specific Component type (a Transform, for example), dragging a GameObject or a Prefab onto the reference property will work, provided that the GameObject or Prefab contains a Component of the correct type. Even though a GameObject or Prefab was dragged, the property will reference the Component in question.
If you drag an object onto a reference property, and the object is not of the correct type or does not contain the right Component, you won't be able to assign the object to the reference property.
Object Picker
You can click on the small target icon next to a reference slot to open the Object Picker.

Reference to the Object Picker from the Editor.
The Object Picker is a simple window for assigning objects in the Inspector, letting you preview and search the available objects before assigning one.
Although the Object Picker is really easy to use, there are a few things you should be aware of. These are described below.

Anatomy of the Object Picker.
- Search: When there are lots of objects in the picker, you can use the Search field to filter them. This search field can also search objects using their Labels.
- View Selector: Switches the base of the search between objects in the Scene and Assets.
- Preview Size: This horizontal scroll bar lets you increase/decrease the size of your preview objects in the preview window. With this you can see more or fewer objects in the preview window at any moment.
- Preview Window: Here are all the objects located in your Scene/Assets folder filtered by the Search field.
- Object Info: Displays information about the currently selected object. The contents of this field depend on the type of object being viewed; for example, if you pick a mesh, it will tell you the number of vertices and triangles, and whether or not it has UVs and is skinned. If you pick an audio file instead, it will give you information such as the bit rate and length of the audio.
- Object Preview: This also depends on the type of object you are viewing. If you choose a mesh, it will display how the mesh looks; if you choose a script file, it will just display the icon of the file.
The Object Picker works on any asset you have in your project, whether it's a video, a song, a terrain, a GUI skin, a scripting file, or a mesh; it is a tool you will use often.
Hints
- Use Labels on your assets and you will be able to find them more easily by searching for them using the search field of the Object Picker.
- If you don't want to see the descriptions of the objects, you can move the slider in the bottom middle of the preview window downward.
- If you want to see a detailed preview of an object, you can enlarge the object preview by dragging the slider in the bottom middle of the preview window.
Multi-Object Editing
Starting in Unity 3.5 you can select multiple objects of the same type and edit them simultaneously in the Inspector. Any changed properties will be applied to all of the selected objects. This is a big time saver if you want to make the same change to many objects.
When selecting multiple objects, a component is only shown in the Inspector if that component exists on all the selected objects. If it only exists on some of them, a small note will appear at the bottom of the Inspector saying that components that are only on some of the selected objects cannot be multi-edited.
Property Values
When multiple objects are selected, each property shown in the Inspector represents that property on each of the selected objects. If the value of the property is the same for all the objects, the value will be shown as normal, just like when editing a single object. If the value of the property is not the same for all the selected objects, no value is shown and a dash or similar is shown instead, indicating that the values are different.

Regardless of whether a value is shown or a dash, the property value can be edited as usual and the changed value is applied to all the selected objects. If the values are different and a dash is thus shown, it's also possible to right-click on the label of the property. This brings up a menu that lets you choose from which of the objects to inherit the value.

Multi-Editing Prefab or Model Instances
Prefabs can be multi-edited just like Game Objects in the scene. Instances of prefabs or of models can also be multi-edited; however certain restrictions apply: When editing a single prefab or model instance, any property that is different from the prefab or model will appear in bold, and when right clicking there's an option to revert the property to the value it has in the prefab or model. Furthermore, the Game Object has options to apply or revert all changes. None of these things are available when multi-object editing. Properties cannot be reverted or applied; nor will they appear in bold if different from the prefab or model. To remind you of this, the Inspector will show a note with Instance Management Disabled where the Select, Revert and Apply buttons would normally appear.

Non-Supported Objects
A few object types do not support multi-object editing. When you select multiple objects simultaneously, these objects will show a small note saying "Multi-object editing not supported".
If you have made a custom editor for one of your own scripts, it will also show this message if it doesn't support multi-object editing. See the script reference for the Editor class to learn how to implement support for multi-object editing for your own custom editors.
Page last updated: 2012-01-23
Inspector Options
The Inspector Lock and the Inspector Debug Mode are two useful options that can help you in your workflow.
Lock
The lock lets you maintain focus on a specific GameObject in the Inspector while selecting other GameObjects. To toggle the lock of an Inspector, click the lock/unlock (
) icon above the Inspector, or open the tab menu and select Lock.

Locking the Inspector from the tab menu.
Note that you can have more than one Inspector open; you could, for example, lock one of them to a particular GameObject while keeping the other unlocked to show whichever GameObject is selected.
Debug
Debug Mode lets you inspect private variables of components in the Inspector, which are normally not shown. To change to Debug Mode, open the tab menu and select Debug.
In Debug Mode, all components are shown using a default interface, rather than the custom interfaces that some components use in Normal Mode. For example, the Transform component will in Debug Mode show the raw Quaternion values of the rotation, rather than the Euler angles shown in Normal Mode. You can also use Debug Mode to inspect the values of private variables in your own script components.

Debug Mode in the Inspector lets you inspect private variables in your scripts and other components.
The Debug Mode is per-Inspector; you can have one Inspector in Debug Mode while another is not.
Page last updated: 2012-11-09
Using The Scene View
The Scene View is your interactive sandbox. You will use the Scene View to select and position environments, the player, the camera, enemies, and all other GameObjects. Maneuvering and manipulating objects within the Scene View are some of the most important functions in Unity, so it's important to be able to do them quickly.
Page last updated: 2010-09-06
Scene View Navigation
The Scene View has a set of navigation controls to help you move around quickly and efficiently.
Arrow Movement
You can use the arrow keys to move around the scene as though "walking" through it. The up and down arrows move the camera forward and backward in the direction it is facing. The left and right arrows pan the view sideways. Hold down Shift with an arrow key to move faster.
Focusing
If you select a GameObject in the hierarchy, then move the mouse over the scene view and press the F key, the view will move so as to center on the object. This feature is referred to as frame selection.
Move, Orbit and Zoom
Moving, orbiting and zooming are key operations in Scene View navigation, so Unity provides several alternative ways to perform them for maximum convenience.
Using the Hand Tool
When the hand tool is selected (shortcut: Q), the following mouse controls are available:
Move: Click-drag to drag the camera around.
Orbit: Hold Alt and click-drag to orbit the camera around the current pivot point.
Zoom: Hold Control (Command on Mac) and click-drag to zoom the camera. Holding down Shift will increase the rate of movement and zooming.
Shortcuts Without Using the Hand Tool
For extra efficiency, all of these controls can also be used regardless of which transform tool is selected. The most convenient controls depend on which mouse or track-pad you are using:
| Action | 3-button mouse | 2-button mouse or track-pad | Mac with only one mouse button or track-pad |
|---|---|---|---|
| Move | Hold Alt and middle click-drag. | Hold Alt+Control and click-drag. | Hold Alt+Command and click-drag. |
| Orbit | Hold Alt and click-drag. | Hold Alt and click-drag. | Hold Alt and click-drag. |
| Zoom | Hold Alt and right click-drag or use scroll-wheel. | Hold Alt and right click-drag. | Hold Alt+Control and click-drag or use two-finger swipe. |
Flythrough Mode
The Flythrough mode lets you navigate the Scene View by flying around in first person similar to how you would navigate in many games.
- Click and hold the right mouse button.
- Now you can move the view around using the mouse and use the WASD keys to move left/right/forward/backward and the Q and E keys to move up and down.
- Holding down Shift will make you move faster.
Flythrough mode is designed for Perspective Mode. In Isometric Mode, holding down the right mouse button and moving the mouse will orbit the camera instead.
Scene Gizmo
In the upper-right corner of the Scene View is the Scene Gizmo. This displays the Scene View Camera's current orientation, and allows you to quickly modify the viewing angle.

You can click on any of the arms to snap the Scene View Camera to that direction. Click the middle of the Scene Gizmo, or the text below it, to toggle between Isometric Mode and Perspective Mode. You can also always shift-click the middle of the Scene Gizmo to get a "nice" perspective view with an angle that is looking at the scene from the side and slightly from above.

Perspective mode.

Isometric mode. Objects do not get smaller with distance here!
Mac Trackpad Gestures
On a Mac with a trackpad, you can drag with two fingers to zoom the view.
You can also use three fingers to simulate the effect of clicking the arms of the Scene Gizmo: drag up, left, right or down to snap the Scene View Camera to the corresponding direction. In OS X 10.7 "Lion" you may have to change your trackpad settings in order to enable this feature:
- Open System Preferences and then Trackpad (or type trackpad into Spotlight).
- Click into the "More Gestures" option.
- Click the first option labelled "Swipe between pages" and then either set it to "Swipe left or right with three fingers" or "Swipe with two or three fingers".
Positioning GameObjects
When you are building your games, you'll place lots of different objects in your game world.
Focusing
It can be useful to focus the Scene View Camera on an object before manipulating it. Select any GameObject and press the F key. This will center the Scene View and pivot point on the selection. This is also known as Frame Selection.
Move, Rotate, and Scale
Use the Transform Tools in the Toolbar to Move, Rotate, and Scale individual GameObjects. Each has a corresponding Gizmo that appears around the selected GameObject in the Scene View. You can use the mouse and manipulate any Gizmo axis to alter the Transform Component of the GameObject, or you can type values directly into the number fields of the Transform Component in the Inspector. Each of the three transform modes can be selected with a hotkey - W for Translate, E for Rotate and R for Scale.

- Click and drag in the center of the Gizmo to manipulate the object on all axes at once.
- At the center of the Translate gizmo, there are three small squares that can be used to drag the object within a single plane (ie, two axes can be moved at once while the third is kept still).
- If you have a three button mouse, you can click the middle button to adjust the last-adjusted axis (which turns yellow) without clicking directly on it.
- Be careful when using the scaling tool, as non-uniform scales (e.g. 1,2,1) can cause unusual scaling of child objects.
- For more information on transforming GameObjects, please view the Transform Component page.
Gizmo Display Toggles
The Gizmo Display Toggles are used to define the location of any Transform Gizmo.

Gizmo Display Toggles
- Position:
- Center will position the Gizmo at the center of the object's rendered bounds.
- Pivot will position the Gizmo at the actual pivot point of a Mesh.
- Rotation:
- Local will keep the Gizmo's rotation relative to the object's.
- Global will clamp the Gizmo to world space orientation.
Unit Snapping
While dragging any Gizmo Axis using the Move Tool, you can hold the Ctrl key (Cmd on Mac) to snap to increments defined in the Snap Settings.
You can change the unit distance that is used for the unit snapping using the menu Edit->Snap Settings...

Scene View Unit Snapping settings.
Surface Snapping
While dragging in the center using the Move Tool, hold Shift and Ctrl (Cmd on Mac) to snap the object to the intersection of any Collider, making precise positioning of objects incredibly fast.
Look-At Rotation
While using the Rotate Tool, hold Shift and Ctrl (Cmd on Mac) to rotate the object towards a point on the surface of any Collider. This makes the orientation of objects relative to one another simple.
Vertex Snapping
Use vertex snapping to assemble your worlds more easily. This is a simple but powerful feature in Unity: it lets you take any vertex from a given mesh and, with your mouse, place that vertex in the same position as any vertex from any other mesh you choose.
With this you can assemble your worlds really fast. For example, you could precisely align road sections in a racing game, or place power-up items at the vertices of a mesh.

Assembling a road with Vertex Snapping.
Using vertex snapping in Unity is simple. Just follow these steps:
- Select the mesh you want to manipulate and make sure the Transform Tool is active.
- Press and hold the V key to activate the vertex snapping mode.
- Move your cursor over the vertex on your mesh that you want to use as the pivot point.
- Holding down the left mouse button once your cursor is over the desired vertex, drag your mesh next to any other vertex on another mesh.
- Release the mouse button and the V key when you are happy with the results.
- Shift-V acts as a toggle of this functionality.
- You can snap vertex to vertex, vertex to surface, and pivot to vertex.
A video on how to use vertex snapping can be found here.
Page last updated: 2012-11-13
View Modes
The Scene View control bar lets you choose various options for viewing the scene and also control whether lighting and audio are enabled. These controls only affect the scene view during development and have no effect on the built game.

Draw Mode
The first drop-down menu selects which Draw Mode will be used to depict the scene.

Draw Mode drop-down
- Textured: show surfaces with their textures visible.
- Wireframe: draw meshes with a wireframe representation.
- Tex-Wire: show meshes textured and with wireframes overlaid.
- Render Paths: show the rendering path for each object using a color code: Green indicates deferred lighting, yellow indicates forward rendering and red indicates vertex lit.
- Lightmap Resolution: overlay a checkered grid on the scene to show the resolution of the lightmaps.
Render Mode
The next drop-down menu selects which of the four Render Modes will be used to render the scene.

Render Mode drop-down
- RGB: render the scene with objects normally colored.
- Alpha: render colors with alpha.
- Overdraw: render objects as transparent "silhouettes". The transparent colors accumulate, making it easy to spot places where one object is drawn over another.
- Mipmaps: show ideal texture sizes using a color code: red indicates that the texture is larger than necessary (at the current distance and resolution); blue indicates that the texture could be larger. Naturally, ideal texture sizes depend on the resolution at which the game will run and how close the camera can get to particular surfaces.
Scene Lighting, Game Overlay, and Audition Mode
To the right of the dropdown menus are three buttons which control other aspects of the scene representation.

The first button determines whether the view will be lit using a default scheme or with the lights that have actually been added to the scene. The default scheme is used initially but this will change automatically when the first light is added. The second button controls whether skyboxes and GUI elements will be rendered in the scene view and also shows and hides the placement grid. The third button switches audio sources in the scene on and off.
Page last updated: 2011-11-10
Gizmo and Icon Visibility
Gizmos and icons have a few display options which can be used to reduce clutter and improve the visual clarity of the scene during development.
The Icon Selector
Using the Icon Selector, you can easily set custom icons for GameObjects and scripts that will be used both in the Scene View and the Inspector. To change the icon for a GameObject, simply click on its icon in the Inspector. The icons of script assets can be changed in a similar way. In the Icon Selector is a special kind of icon called a Label Icon. This type of icon will show up in the Scene View as a text label using the name of the GameObject. Icons for built-in Components cannot be changed.
Selecting an icon for a GameObject
Selecting an icon for a script
Showing and Hiding Icons and Gizmos
The visibility of an individual component's gizmos depends on whether the component is expanded or collapsed in the inspector (ie, collapsed components are invisible). However, you can use the Gizmos dropdown to expand or collapse every component of a given type at once. This is a useful way to reduce visual clutter when there are a large number of gizmos and icons in the scene.
To show the state of the current gizmos and icons, click on the Gizmos dropdown in the control bar of the Scene or Game View. The toggles here are used to set which icons and gizmos are visible.
Note that the scripts that show up in the Scripts section are those that either have a custom icon or have an OnDrawGizmos or OnDrawGizmosSelected function implemented.
The Gizmos dropdown, displaying the visibility state of icons and gizmos
The slider can be used to adjust the size used for icon display in the scene. If the slider is placed at the extreme right, the icon will always be drawn at its natural size. Otherwise, the icon will be scaled according to its distance from the scene view camera (although there is an upper limit on the display size in order that screen clutter be avoided).
Page last updated: 2012-11-15
Searching
When working with large complex scenes it can be useful to search for specific objects. By using the Search feature in Unity, you can filter out only the object or group of objects that you want to see. You can search assets by their name, by Component type, and in some cases by asset Labels. You can specify the search mode by choosing from the Search drop-down menu.
Scene Search
When a scene is loaded in the Editor, you can see the objects in both the Scene View and the Hierarchy. The same objects are shown in both places, so if you type in a search term (eg, "elevator"), you'll see the filter applied both visually in the Scene View and in a more typical manner in the Hierarchy. There is also no difference between typing the search term into the search field in the Scene View or the Hierarchy -- the filter takes effect in both views in either case.

Scene View and Hierarchy with no search applied.

Scene View and Hierarchy with active filtering of search term.
When a search term filter is active, the Hierarchy doesn't show hierarchical relationships between GameObjects, but you can select any GameObject, and its hierarchical path in the scene will be shown at the bottom of the Hierarchy.

Click on a GameObject in the filtered list to see its hierarchical path.
When you want to clear the search filter, just click the small cross in the search field.
In the Scene search you can search either by Name or by Type. Click on the small magnifying glass in the search field to open the search drop-down menu and choose the search mode.

Search by Name, Type, or All.
Project Search
The same fundamentals apply to searching of assets in the Project View -- just type in your search term and you'll see all the relevant assets appear in the filter.
In the Project search you can search by Name or by Type as in the Scene search, and additionally you can search by Label. Click on the small magnifying glass in the search field to open the search drop-down menu and choose the search mode.

Search by Name, Type, Label, or All.
Object Picker Search
When assigning an object via the Object Picker, you can also enter a search term to filter the objects you want to see.
Page last updated: 2011-11-10
Prefabs
A Prefab is a type of asset -- a reusable GameObject stored in Project View. Prefabs can be inserted into any number of scenes, multiple times per scene. When you add a Prefab to a scene, you create an instance of it. All Prefab instances are linked to the original Prefab and are essentially clones of it. No matter how many instances exist in your project, when you make any changes to the Prefab you will see the change applied to all instances.
Creating Prefabs
In order to create a Prefab, simply drag a GameObject that you've created in the scene into the Project View. The GameObject's name will turn blue to show that it is a Prefab. You can rename your new Prefab.
After you have performed these steps, the GameObject and all its children have been copied into the Prefab data. The Prefab can now be re-used in multiple instances. The original GameObject in the Hierarchy has now become an instance of the Prefab.
Prefab Instances
To create a Prefab instance in the current scene, drag the Prefab from the Project View into the Scene or Hierarchy View. This instance is linked to the Prefab, as displayed by the blue text used for its name in the Hierarchy View.

Three of these GameObjects are linked to Prefabs. One of them is not.
- If you have selected a Prefab instance, and want to make a change that affects all instances, you can click the Select button in the Inspector to select the source Prefab.
- Information about instantiating prefabs from scripts is in the Instantiating Prefabs page.
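Instantiating from a script follows the same pattern as dragging a Prefab into the scene. A minimal sketch, where brickPrefab is a hypothetical variable that you would assign a Prefab in the Inspector:

```csharp
// Spawns five linked instances of the assigned Prefab in a row.
using UnityEngine;

public class BrickSpawner : MonoBehaviour
{
    public GameObject brickPrefab;  // drag a Prefab here in the Inspector

    void Start()
    {
        for (int i = 0; i < 5; i++)
        {
            // each copy is placed 2 units apart along the x axis
            Instantiate(brickPrefab, new Vector3(i * 2.0f, 0, 0), Quaternion.identity);
        }
    }
}
```

Each spawned copy is a normal Prefab instance, linked to the source Prefab just like instances created by dragging.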
Inheritance
Inheritance means that whenever the source Prefab changes, those changes are applied to all linked GameObjects. For example, if you add a new script to a Prefab, all of the linked GameObjects will instantly contain the script as well. However, it is possible to change the properties of a single instance while keeping the link intact. Simply change any property of a prefab instance, and watch as the variable name becomes bold. The variable is now overridden. Overridden properties will not be affected by changes in the source Prefab.
This allows you to modify Prefab instances to make them unique from their source Prefabs without breaking the Prefab link.

A linked GameObject with no overrides enabled.

A linked GameObject with several (bold) overrides enabled.
- If you want to update the source Prefab and all instances with the new overridden values, you can click the Apply button in the Inspector.
- Note that the root's position and rotation will not be applied, as that would affect the instances' absolute positions and would put all instances in the same place. However, position and rotation from any children or descendants of the root will be applied as they are computed relative to the root's transform.
- If you want to discard all overrides on a particular instance, you can click the Revert button.
Imported Prefabs
When you place a mesh asset into your Assets folder, Unity automatically imports the file and generates something that looks similar to a Prefab out of the mesh. This is not actually a Prefab, it is simply the asset file itself. Instancing and working with assets introduces some limitations that are not present when working with normal Prefabs.

Notice the asset icon is a bit different from the Prefab icons
The asset is instantiated in the scene as a GameObject, linked to the source asset instead of a normal Prefab. Components can be added and removed from this GameObject as normal. However, you cannot apply any changes to the asset itself since this would add data to the asset file itself! If you're creating something you want to re-use, you should make the asset instance into a Prefab following the steps listed above under "Creating Prefabs".
- When you have selected an instance of an asset, the Apply button in the Inspector is replaced with an Edit button. Clicking this button will launch the editing application for your asset (e.g. Maya or Max).
Lights
Lights are an essential part of every scene. While meshes and textures define the shape and look of a scene, lights define the color and mood of your 3D environment. You'll likely work with more than one light in each scene. Making them work together requires a little practice, but the results can be quite amazing.

A simple, two-light setup
Lights can be added to your scene from the GameObject->Create Other menu. Once a light has been added, you can manipulate it like any other GameObject. Additionally, you can add a Light Component to any selected GameObject by using Component->Rendering->Light.
There are many different options within the Light Component in the Inspector.

Light Component properties in the Inspector
By simply changing the Color of a light, you can give a whole different mood to the scene.

Bright, sunny lights

Dark, medieval lights

Spooky night lights
The lights you create this way are realtime lights - their lighting is calculated each frame while the game is running. If you know the lights will not change, you can make your game faster and look much better by using Lightmapping.
Rendering Paths
Unity supports different Rendering Paths. These paths mainly affect Lights and Shadows, so choosing the correct rendering path for your game's requirements can improve your project's performance. For more info about rendering paths you can visit the Rendering Paths section.
More information
For more information about using Lights, check the Lights page in the Reference Manual.
Page last updated: 2012-11-13
Cameras
Just as cameras are used in films to display the story to the audience, Cameras in Unity are used to display the game world to the player. You will always have at least one camera in a scene, but you can have more than one. Multiple cameras can give you a two-player splitscreen or create advanced custom effects. You can animate cameras, or control them with physics. Practically anything you can imagine is possible with cameras, and you can use typical or unique cameras to fit your game's style.
The remaining text is from the Camera Component reference page.
Camera
Cameras are the devices that capture and display the world to the player. By customizing and manipulating cameras, you can make the presentation of your game truly unique. You can have an unlimited number of cameras in a scene. They can be set to render in any order, at any place on the screen, or only certain parts of the screen.

Unity's flexible Camera object
Properties
| Clear Flags | Determines which parts of the screen will be cleared. This is handy when using multiple Cameras to draw different game elements. |
| Background | The color applied to the remaining screen after all elements in view have been drawn and there is no skybox. |
| Culling Mask | Includes or omits layers of objects to be rendered by the Camera. Assign layers to your objects in the Inspector. |
| Projection | Toggles the camera's capability to simulate perspective. |
| Perspective | Camera will render objects with perspective intact. |
| Orthographic | Camera will render objects uniformly, with no sense of perspective. |
| Size (when Orthographic is selected) | The viewport size of the Camera when set to Orthographic. |
| Field of view | The width of the Camera's view angle, measured in degrees along the local Y axis. |
| Clipping Planes | Distances from the camera to start and stop rendering. |
| Near | The closest point relative to the camera that drawing will occur. |
| Far | The furthest point relative to the camera that drawing will occur. |
| Normalized View Port Rect | Four values that indicate where on the screen this camera view will be drawn, in Screen Coordinates (values 0-1). |
| X | The beginning horizontal position that the camera view will be drawn. |
| Y | The beginning vertical position that the camera view will be drawn. |
| W (Width) | Width of the camera output on the screen. |
| H (Height) | Height of the camera output on the screen. |
| Depth | The camera's position in the draw order. Cameras with a larger value will be drawn on top of cameras with a smaller value. |
| Rendering Path | Options for defining which rendering methods will be used by the camera. |
| Use Player Settings | This camera will use whichever Rendering Path is set in the Player Settings. |
| Vertex Lit | All objects rendered by this camera will be rendered as Vertex-Lit objects. |
| Forward | All objects will be rendered with one pass per material, as was standard in Unity 2.x. |
| Deferred Lighting (Unity Pro only) | All objects will be drawn once without lighting, then the lighting of all objects will be rendered together at the end of the render queue. |
| Target Texture (Unity Pro/Advanced only) | Reference to a Render Texture that will contain the output of the Camera view. Setting this reference will disable this Camera's capability to render to the screen. |
| HDR | Enables High Dynamic Range rendering for this camera. |
Details
Cameras are essential for displaying your game to the player. They can be customized, scripted, or parented to achieve just about any kind of effect imaginable. For a puzzle game, you might keep the Camera static for a full view of the puzzle. For a first-person shooter, you would parent the Camera to the player character, and place it at the character's eye level. For a racing game, you'd probably have the Camera follow your player's vehicle.
You can create multiple Cameras and assign each one a different Depth. Cameras are drawn from low Depth to high Depth. In other words, a Camera with a Depth of 2 will be drawn on top of a Camera with a Depth of 1. You can adjust the values of the Normalized View Port Rectangle property to resize and position the Camera's view onscreen. This can create multiple mini-views like missile cams, map views, rear-view mirrors, etc.
Rendering Paths
Unity supports different rendering paths. You should choose which one to use depending on your game content and target platform / hardware. Different rendering paths have different features and performance characteristics that mostly affect Lights and Shadows. The rendering path used by your project is chosen in the Player Settings. Additionally, you can override it for each Camera.
For more info on rendering paths, check the rendering paths page.
Clear Flags
Each Camera stores color and depth information when it renders its view. The portions of the screen that are not drawn in are empty, and will display the skybox by default. When you are using multiple Cameras, each one stores its own color and depth information in buffers, accumulating more data as each Camera renders. As any particular Camera in your scene renders its view, you can set the Clear Flags to clear different collections of the buffer information. This is done by choosing one of the following four options:
スカイボックス
This is the default setting. Any empty portions of the screen will display the current Camera's skybox. If the current Camera has no skybox set, it will default to the skybox chosen in the Render Settings (found in Edit->Render Settings). It will then fall back to the Background Color. Alternatively a Skybox component can be added to the camera. If you want to create a new Skybox, you can use this guide.
Solid Color
Any empty portions of the screen will display the current Camera's Background Color.
Depth Only
If you want to draw a player's gun without letting it get clipped inside the environment, set one Camera at Depth 0 to draw the environment, and another Camera at Depth 1 to draw the weapon alone. Set the weapon Camera's Clear Flags to Depth only. This will keep the graphical display of the environment on the screen, but discard all information about where each object exists in 3-D space. When the gun is drawn, the opaque parts will completely cover anything drawn, regardless of how close the gun is to the wall.

The gun is drawn last, after clearing the depth buffer of the cameras before it
Don't Clear
In this mode neither the color nor the depth buffer is cleared. The result is that each frame is drawn over the next, resulting in a smear-looking effect. This isn't typically used in games, and would most likely be used with a custom shader.
Clip Planes
The Near and Far Clip Plane properties determine where the Camera's view begins and ends. The planes are laid out perpendicular to the Camera's direction and are measured from its position. The Near plane is the closest location that will be rendered, and the Far plane is the furthest.
The clipping planes also determine how depth buffer precision is distributed over the scene. In general, to get better precision you should move the Near plane as far away as possible.
The near and far clip planes, together with the planes defined by the field of view of the camera, describe what is popularly known as the camera frustum. Unity ensures that when rendering your objects, those which are completely outside of this frustum are not displayed. This is called Frustum Culling. Frustum Culling happens irrespective of whether you use Occlusion Culling in your game.
For performance reasons, you might want to cull small objects earlier. For example, small rocks and debris could be made invisible at a much smaller distance than large buildings. To do that, put the small objects into a separate layer and set up per-layer cull distances using the Camera.layerCullDistances script function.
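As a sketch of how per-layer culling might be set up (the layer index 8 and the 50-unit distance are illustrative assumptions; use whichever layer your small objects are on):

```csharp
// Attach to a Camera. Objects on layer 8 (an assumed "small props" layer)
// are culled beyond 50 units; entries left at 0 use the camera's far plane.
using UnityEngine;

public class SmallObjectCulling : MonoBehaviour
{
    void Start()
    {
        float[] distances = new float[32];  // one entry per layer
        distances[8] = 50.0f;               // cull layer 8 beyond 50 units
        GetComponent<Camera>().layerCullDistances = distances;
    }
}
```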
Culling Mask
The Culling Mask is used for selectively rendering groups of objects using Layers. More information on using layers can be found here.
Commonly, it is good practice to put your User Interface on a different layer, then render it by itself with a separate Camera set to render the UI layer alone.
In order for the UI to display on top of the other Camera views, you'll also need to set the Clear Flags to Depth only and make sure that the UI Camera's Depth is higher than the other Cameras'.
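The same setup can be applied from a script. A minimal sketch, assuming the project defines a layer named "UI":

```csharp
// Attach to the UI camera: render only the "UI" layer on top of the
// other cameras, clearing their depth but keeping their color output.
using UnityEngine;

public class UICameraSetup : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        cam.clearFlags = CameraClearFlags.Depth;             // Depth only
        cam.cullingMask = 1 << LayerMask.NameToLayer("UI");  // UI layer only
        cam.depth = 10;                                      // above the scene cameras
    }
}
```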
Normalized Viewport Rectangle
Normalized Viewport Rectangle is specifically for defining a certain portion of the screen that the current camera view will be drawn upon. You can put a map view in the lower-left corner of the screen, or a missile-tip view in the upper-right corner. With a bit of design work, you can use Viewport Rectangle to create some unique behaviors.
With the Normalized Viewport Rectangle, it is quite easy to create a two-player split screen effect. After you have created your two cameras, change both cameras' H values to 0.5, then set player one's Y value to 0.5, and player two's Y value to 0. This will make player one's camera display from halfway up the screen to the top, and player two's camera start at the bottom and stop halfway up the screen.

Two-player display created with the Normalized Viewport Rectangle
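The same split-screen values can also be assigned from a script. A sketch, where the two camera object names are hypothetical:

```csharp
// Assumes two cameras named "Player1Camera" and "Player2Camera" exist.
// Rect takes (x, y, width, height) in normalized 0-1 screen coordinates.
using UnityEngine;

public class SplitScreenSetup : MonoBehaviour
{
    void Start()
    {
        Camera p1 = GameObject.Find("Player1Camera").GetComponent<Camera>();
        Camera p2 = GameObject.Find("Player2Camera").GetComponent<Camera>();
        p1.rect = new Rect(0.0f, 0.5f, 1.0f, 0.5f);  // top half of the screen
        p2.rect = new Rect(0.0f, 0.0f, 1.0f, 0.5f);  // bottom half
    }
}
```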
Orthographic
Marking a Camera as Orthographic removes all perspective from the Camera's view. This is mostly useful for making isometric or 2D games.
Note that fog is rendered uniformly in orthographic camera mode and may therefore not appear as expected. Read more about why in the component reference on Render Settings.

Perspective camera.

Orthographic camera. Objects do not get smaller with distance here!
Render Texture
This feature is only available with a Unity Pro/Advanced license. It places the camera's view onto a Texture that can then be applied to another object. This makes it easy to create sports arena video monitors, surveillance cameras, reflections, etc.

A Render Texture used to create a live arena-cam
Hints
- Cameras can be instantiated, parented, and scripted just like any other GameObject.
- To increase the sense of speed in a racing game, use a high Field of View.
- Cameras can be used in physics simulation if you add a Rigidbody Component.
- There is no limit to the number of Cameras you can have in your scenes.
- Orthographic cameras are great for making 3D user interfaces.
- If you are experiencing depth artifacts (surfaces close to each other flickering), try setting the Near Plane to as large a value as possible.
- Cameras cannot render to the Game Screen and a Render Texture at the same time; only one or the other.
- Pro license holders have the option of rendering a Camera's view to a texture, called Render-to-Texture, for even more unique effects.
- Unity comes with pre-installed Camera scripts, found in Components->Camera Control. Experiment with them to get a taste of what's possible.
Terrains
This section will explain how to use the Terrain Engine. It will cover creation, technical details, and other considerations. It is divided into the following sections:
Using Terrains
This section covers the most basic information about using Terrains. This includes creating Terrains and how to use the new Terrain tools and brushes.
Height
This section explains how to use the different tools and brushes to modify the Height of the Terrain.
Terrain Textures
This section explains how to add, paint, and blend Terrain Textures using different brushes.
Trees
This section contains important information for creating your own tree assets. It also covers adding and painting trees on your Terrain.
Grass
This section explains how Grass works, and how to use it.
Detail Meshes
This section explains practical usage for detail meshes like rocks, hay, and other vegetation.
Lightmaps
You can lightmap Terrains just like any other object using Unity's built-in Lightmapper. See the Lightmapping Quickstart for help.
Other Settings
This section covers all the other settings associated with Terrains.
Mobile performance considerations
Rendering terrain is quite expensive, so terrain engine use is not practical on lower-end mobile devices.
Page last updated: 2010-06-03
Asset Import and Creation
A large part of making a game is utilizing your asset source files in your GameObjects. This goes for textures, models, sound effects and behaviour scripts. Using the Project View inside Unity, you have quick access to all the files that make up your game:

The Project View displays all source files and created Prefabs
This view shows the organization of files in your project's Assets folder. Whenever you update one of your asset files, the changes are immediately reflected in your game!
To import an asset file into your project, move the file into your project's Assets folder in the Finder, and it will automatically be imported into Unity. To apply your assets, simply drag the asset file from the Project View window into the Hierarchy or Scene View. If the asset is meant to be applied to another object, drag the asset over the object.
Hints
- It is always a good idea to add labels to your assets when you are working with big projects or when you want to keep all your assets organized; you can then search for the labels associated with each asset in the search field in the Project View.
- When backing up a project folder always back up Assets, ProjectSettings and Library folders. The Library folder contains all meta data and all the connections between objects, thus if the Library folder gets lost, you will lose references from scenes to assets. Easiest is just to back up the whole project folder containing the Assets, ProjectSettings and Library folders.
- Rename and move files to your heart's content inside Project View; nothing will break.
- Never rename or move anything from the Finder or another program; everything will break. In short, Unity stores lots of metadata for each asset (things like import settings, cached versions of compressed textures, etc.) and if you move a file externally, Unity can no longer associate metadata with the moved file.
Continue reading for more information:
- Importing Assets
- Mesh
- Animations (Legacy)
- Materials and Shaders
- Textures
- Procedural Materials
- Movie Textures
- Audio Files
- Using Scripts
- Asset Store
- Asset Server (Pro Only)
- Behind the Scenes
Importing Assets
Unity will automatically detect files as they are added to your Project folder's Assets folder. When you put any asset into your Assets folder, you will see the asset appear in your Project View.

The Project View is your window into the Assets folder, normally accessible from the file manager
When you are organizing your Project View, there is one very important thing to remember:
Never move any assets or organize this folder from the Explorer (Windows) or Finder (OS X). Always use the Project View!
There is a lot of meta data stored about relationships between asset files within Unity. This data is all dependent on where Unity expects to find these assets. If you move an asset from within the Project View, these relationships are maintained. If you move them outside of Unity, these relationships are broken. You'll then have to manually re-link lots of dependencies, which is something you probably don't want to do.
So just remember to only save assets to the Assets folder from other applications, and never rename or move files outside of Unity. Always use Project View. You can safely open files for editing from anywhere, of course.
Creating and Updating Assets
When you are building a game and you want to add a new asset of any type, all you have to do is create the asset and save it somewhere in the Assets folder. When you return to Unity or launch it, the added file(s) will be detected and imported.
Additionally, as you update and save your assets, the changes will be detected and the assets will be re-imported in Unity. This allows you to focus on refining your assets without struggling to make them compatible with Unity. Updating and saving your assets normally from their native applications provides an optimal, hassle-free workflow that feels natural.
Asset Types
There are a handful of basic asset types that will go into your game. The types are:
- Mesh Files & Animations
- Texture Files
- Sound Files
We'll discuss the details of importing each of these file types and how they are used.
Meshes & Animations
Whichever 3D package you are using, Unity will import the meshes and animations from each file. For a list of applications that are supported by Unity, please see this page.
Your mesh file does not need to have an animation to be imported. If you do use animations, you have your choice of importing all animations from a single file, or importing separate files, each with one animation. For more information about importing animations, please see the page about Animation Import.
Once your mesh is imported into Unity, you can drag it to the Scene or Hierarchy to create an instance of it. You can also add Components to the instance, which will not be attached to the mesh file itself.
Meshes will be imported with UVs and a number of default Materials (one material per UV). You can then assign the appropriate texture files to the materials and complete the look of your mesh in Unity's game engine.
Textures
Unity supports all image formats. Even when working with layered Photoshop files, they are imported without disturbing the Photoshop format. This allows you to work with a single texture file for a very care-free and streamlined experience.
You should make your textures in dimensions that are powers of two (e.g. 32x32, 64x64, 128x128, 256x256, etc.). Simply placing them in your project's Assets folder is sufficient, and they will appear in the Project View.
Once your texture has been imported, you should assign it to a Material. The material can then be applied to a mesh, Particle System, or GUI Texture. Using the Import Settings, it can also be converted to a Cubemap or Normalmap for different types of applications in the game. For more information about importing textures, please read the Texture Component page.
Sounds

Desktop
Unity features support for two types of audio: Uncompressed Audio or Ogg Vorbis. Any type of audio file you import into your project will be converted to one of these formats.
File Type Conversion
| .AIFF | Converted to uncompressed audio on import, best for short sound effects. |
| .WAV | Converted to uncompressed audio on import, best for short sound effects. |
| .MP3 | Converted to Ogg Vorbis on import, best for longer music tracks. |
| .OGG | Compressed audio format, best for longer music tracks. |
Import Settings
If you are importing a file that is not already compressed as Ogg Vorbis, you have a number of options in the Import Settings of the Audio Clip. Select the Audio Clip in the Project View and edit the options in the Audio Importer section of the Inspector. Here, you can compress the Clip into Ogg Vorbis format, force it into Mono or Stereo playback, and tweak other options. There are positives and negatives for both Ogg Vorbis and uncompressed audio. Each has its own ideal usage scenarios, and you generally should not use either one exclusively.
Read more about using Ogg Vorbis or Uncompressed audio on the Audio Clip Component Reference page.

iOS
Unity features support for two types of audio: Uncompressed Audio or MP3 Compressed audio. Any type of audio file you import into your project will be converted to one of these formats.
File Type Conversion
| .AIFF | Imports as uncompressed audio for short sound effects. Can be compressed in Editor on demand. |
| .WAV | Imports as uncompressed audio for short sound effects. Can be compressed in Editor on demand. |
| .MP3 | Imports as Apple Native compressed format for longer music tracks. Can be played on device hardware. |
| .OGG | OGG compressed audio format, incompatible with the iPhone device. Please use MP3 compressed sounds on the iPhone. |
Import Settings
When you are importing an audio file, you can select its final format and choose to force it to stereo or mono channels. To access the Import Settings, select the Audio Clip in the Project View and find the Audio Importer in the Inspector. Here, you can compress the Clip into the MP3 compressed format, force it into Mono or Stereo playback, and tweak other options, such as the very important Decompress On Load setting.
Read more about using MP3 Compressed or Uncompressed audio on the Audio Clip Component Reference page.

Android
Unity features support for two types of audio: Uncompressed Audio or MP3 Compressed audio. Any type of audio file you import into your project will be converted to one of these formats.
File Type Conversion
| .AIFF | Imports as uncompressed audio for short sound effects. Can be compressed in Editor on demand. |
| .WAV | Imports as uncompressed audio for short sound effects. Can be compressed in Editor on demand. |
| .MP3 | Imports as MP3 compressed format for longer music tracks. |
| .OGG | Note: the OGG compressed audio format is incompatible with some Android devices, so Unity does not support it for the Android platform. Please use MP3 compressed sounds instead. |
Import Settings
When you are importing an audio file, you can select its final format and choose to force it to stereo or mono channels. To access the Import Settings, select the Audio Clip in the Project View and find the Audio Importer in the Inspector. Here, you can compress the Clip into the MP3 compressed format, force it into Mono or Stereo playback, and tweak other options, such as the very important Decompress On Load setting.
Read more about using MP3 Compressed or Uncompressed audio on the Audio Clip Component Reference page.
Once sound files are imported, they can be attached to any GameObject. The Audio file will create an Audio Source Component automatically when you drag it onto a GameObject.
Page last updated: 2012-10-26
Meshes
When a 3D model is imported, Unity represents it internally as a Mesh. A Mesh must be attached to a GameObject using a Mesh Filter component. For the mesh to be visible, the GameObject must also have a Mesh Renderer or another suitable rendering component attached. With these components in place, the mesh will be visible at the GameObject's position, with its exact appearance determined by the Material used by the renderer.
Unity's mesh importer provides many options for controlling the generation of the mesh and associating it with its textures and materials. These options are covered on the following pages.
Page last updated: 2012-01-20
3D formats
Meshes can be imported into Unity from two main types of files:
- Exported 3D file formats, such as .FBX or .OBJ
- Proprietary 3D application files, such as .Max and .Blend files from 3D Studio Max or Blender, for example
Either should enable you to get your meshes into Unity, but there are considerations as to which type you choose.
Exported 3D files
Unity can read .FBX, .dae (Collada), .3DS, .dxf and .obj files. FBX exporters can be found here, and obj or Collada exporters are also available for many applications.
Advantages
- Only export the data you need
- Verifiable data (can be re-imported into your 3D package before bringing it into Unity)
- Generally smaller files
- Encourages a modular approach (e.g. different components for collision types or interactivity)
- Supports other 3D packages whose proprietary formats we don't have direct support for
Disadvantages
- Can be a slower pipeline for prototyping and iterations
- It is easier to lose track of versions between source (working file) and game data (exported FBX, for example)
Proprietary 3D application files
Unity can import, through conversion, files from Max, Maya, Blender, Cinema4D, Modo, Lightwave and Cheetah3D, e.g. .MAX, .MB, .MA, etc.
Advantages
- Quick iteration process (save the source file and Unity reimports automatically)
- Simple initially
Disadvantages
- A licensed copy of that software must be installed on all machines using the Unity project
- Files can become bloated with unnecessary data
- Big files can slow down Unity updates
- Less validation of your data (harder to troubleshoot problems)
Animations (Legacy)
Unity's Animation System allows you to create beautifully animated skinned characters. The Animation System supports animation blending, mixing, additive animations, walk cycle time synchronization, animation layers, control over all aspects of the animation playback (time, speed, blend-weights), mesh skinning with 1, 2 or 4 bones per vertex and finally physically based ragdolls.
For best practices on creating a rigged character with optimal performance in Unity, we recommended that you check out the section on Modeling Optimized Characters.
The following topics are covered on this page:
Importing Inverse Kinematics
When importing animated characters from Maya that are created using IK, you have to check the Bake IK & simulation box in the Import Settings. Otherwise, your character will not animate correctly.
Bringing the character into the Scene
When you have imported your model you drag the object from the Project View into the Scene View or Hierarchy View
The animated character is added by dragging it into the scene
The character above has three animations in the animation list and no default animation. You can add more animations to the character by dragging animation clips from the Project View on to the character (in either the Hierarchy or Scene View). This will also set the default animation. When you hit Play, the default animation will be played.
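Clips added to the animation list can also be started from a script via the legacy Animation component. A minimal sketch, where "walk" stands in for one of your character's actual clip names:

```csharp
// Attach to the animated character. Plays one of the clips that was
// added to the Animation component's animation list.
using UnityEngine;

public class PlayWalk : MonoBehaviour
{
    void Start()
    {
        animation.Play("walk");               // start the clip immediately
        // animation.CrossFade("walk", 0.3f); // or blend it in over 0.3 seconds
    }
}
```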
Materials
There is a close relationship between Materials and Shaders in Unity. Shaders contain code that defines what kind of properties and assets to use. Materials allow you to adjust properties and assign assets.

A Shader is implemented through a Material
To create a new Material, use Assets->Create->Material from the main menu or the Project View context menu. Once the Material has been created, you can apply it to an object and tweak all of its properties in the Inspector. To apply it to an object, just drag it from the Project View to any object in the Scene or Hierarchy.
Setting Material Properties
You can select which Shader you want any particular Material to use. Simply expand the drop-down in the Inspector, and choose your new Shader. The Shader you choose will dictate the available properties to change. The properties can be colors, sliders, textures, numbers, or vectors. If you have applied the Material to an active object in the Scene, you will see your property changes applied to the object in real-time.
There are two ways to apply a Texture to a property.
- Drag it from the Project View on top of the Texture square
- Click the Select button, and choose the texture from the drop-down list that appears
Two placement options are available for each Texture:
| Tiling | Scales the texture along the different axes. |
| Offset | Slides the texture around. |
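Under the hood, Tiling and Offset are just a multiply and an add applied to each UV coordinate before the wrap mode kicks in. The following plain JavaScript sketch (an illustration of the math, not Unity API) assumes the Repeat wrap mode:

```javascript
// Transform a UV coordinate by a material's tiling and offset values,
// then apply Repeat wrapping (keep only the fractional part).
function transformUV(u, tiling, offset) {
  const t = u * tiling + offset;
  return t - Math.floor(t);
}

console.log(transformUV(0.25, 2, 0));    // 0.5  - a tiling of 2 doubles the repeat rate
console.log(transformUV(0.75, 2, 0));    // 0.5  - wrapped back into the 0..1 range
console.log(transformUV(0.5,  1, 0.25)); // 0.75 - the offset slides the texture
```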
Built-in Shaders
There is a library of built-in Shaders that come standard with every installation of Unity. There are over 30 of these built-in Shaders, grouped into five basic families.
- Normal: For opaque textured objects.
- Transparent: For partly transparent objects. The texture's alpha channel defines the level of transparency.
- TransparentCutOut: For objects that have only fully opaque and fully transparent areas, like fences.
- Self-Illuminated: For objects that have light emitting parts.
- Reflective: For opaque textured objects that reflect an environment Cubemap.
In each group, the built-in shaders range in complexity from the simple VertexLit to the complex Parallax Bumped with Specular. For more information about the performance of Shaders, please read the built-in Shader performance page.
This grid displays a thumbnail of all built-in Shaders:

The built-in Unity shaders matrix
Shader technical details
Unity has an extensive Shader system, allowing you to tweak the look of all in-game graphics. It works like this:
A Shader basically defines a formula for how the in-game shading should look. Within any given Shader is a number of properties (typically textures). Shaders are implemented through Materials, which are attached directly to individual GameObjects. Within a Material, you will choose a Shader, then define the properties (usually textures and colors, but properties can vary) that are used by the Shader.
This is rather complex, so let's look at a workflow diagram:

On the left side of the graph is the Carbody Shader. Two different Materials are created from this: the Blue car Material and the Red car Material. Each of these Materials has two textures assigned: the Car Texture, which defines the main texture of the car, and a Color FX texture. These properties are used by the shader to make the car finish look like two-tone paint. This can be seen on the front of the red car: it is yellow where it faces the camera and fades towards purple as the angle increases. The car materials are attached to the two cars. The car wheels, lights and windows don't have the color-change effect and must therefore use a different Material. At the bottom of the graph there is a Simple Metal Shader. The Wheel Material uses this Shader. Note that even though the same Car Texture is reused here, the end result is quite different from the car body, because the Shader used in the Material is different.
To be more specific, a Shader defines:
- The method to render an object. This includes using different methods depending on the graphics card of the end user.
- Any vertex and fragment programs used to render.
- Some texture properties that are assignable within Materials.
- Color and number settings that are assignable within Materials.
A Material defines:
- Which textures to use for rendering.
- Which colors to use for rendering.
- Any other assets, such as a Cubemap that is required by the shader for rendering.
Shaders are meant to be written by graphics programmers. They are created using the ShaderLab language, which is quite simple. However, getting a shader to work well on a variety of graphics cards is an involved job and requires a fairly comprehensive knowledge of how graphics cards work.
A number of shaders are built into Unity directly, and some more come in the Standard Assets Library. If you like, there is plenty more shader information in the Built-in Shader Guide.
Page last updated: 2010-09-16
Textures
Textures bring your Meshes, Particles, and interfaces to life! They are image or movie files that you lay over or wrap around your objects. As they are so important, they have a lot of properties. If you are reading this for the first time, jump down to Details, and return to the actual settings when you need a reference.
The shaders you use for your objects put specific requirements on which textures you need, but the basic principle is that you can put any image file inside your project. If it meets the size requirements (specified below), it will get imported and optimized for game use. This extends to multi-layer Photoshop or TIFF files - they are flattened on import, so there is no size penalty for your game.
Properties
The Texture Inspector looks a bit different from most other Inspectors.
The top section contains a few settings, and the bottom part contains the Texture Importer and the texture preview.
Texture Importer
Textures all come from image files in your Project Folder. How they are imported is specified by the Texture Importer. You change these by selecting the texture file in the Project View and modifying the Import Settings in the Inspector.
The topmost item in the Inspector is the Texture Type menu that lets you select the type of texture you want to create from the source image file.
| Texture Type | Select this to set basic parameters depending on the purpose of your texture. |
| Texture | This is the most common setting used for all textures in general. |
| Normal Map | Select this to turn the color channels into a format suitable for real-time normal mapping. For more info, see Normal Maps below. |
| GUI | Use this if your texture is going to be used on any HUD/GUI Controls. |
| Reflection | Also known as a Cube Map; used to create reflections on textures. For more info, see Cubemap Textures. |
| Cookie | This sets up your texture with the basic parameters used for light Cookies. |
| Advanced | Select this when you want to have specific parameters on your texture and full control over it. |

Basic Texture settings selected
| Alpha From Grayscale | If enabled, an alpha transparency channel will be generated from the image's existing values of light and dark. |
| Wrap Mode | Selects how the texture behaves when tiled: |
| Repeat | The texture repeats (tiles) itself. |
| Clamp | The texture's edges get stretched. |
| Filter Mode | Selects how the texture is filtered when it gets stretched by 3D transformations: |
| Point | The texture becomes blocky up close. |
| Bilinear | The texture becomes blurry up close. |
| Trilinear | Like Bilinear, but the texture also blurs between the different mip levels. |
| Aniso Level | Increases texture quality when viewing the texture at a steep angle. Good for floor and ground textures. See below. |

Normal Map settings in the Texture Importer
| Create from Greyscale | If this is enabled, the Bumpiness and Filtering options will be shown. |
| Bumpiness | Controls the amount of bumpiness. |
| Filtering | Determines how the bumpiness is calculated: |
| Smooth | This generates normal maps that are quite smooth. |
| Sharp | Also known as a Sobel filter. This generates normal maps that are sharper than Standard. |
| Wrap Mode | Selects how the texture behaves when tiled: |
| Repeat | The texture repeats (tiles) itself. |
| Clamp | The texture's edges get stretched. |
| Filter Mode | Selects how the texture is filtered when it gets stretched by 3D transformations: |
| Point | The texture becomes blocky up close. |
| Bilinear | The texture becomes blurry up close. |
| Trilinear | Like Bilinear, but the texture also blurs between the different mip levels. |
| Aniso Level | Increases texture quality when viewing the texture at a steep angle. Good for floor and ground textures. See below. |

GUI settings in the Texture Importer
| Filter Mode | Selects how the texture is filtered when it gets stretched by 3D transformations: |
| Point | The texture becomes blocky up close. |
| Bilinear | The texture becomes blurry up close. |
| Trilinear | Like Bilinear, but the texture also blurs between the different mip levels. |

Cursor settings in the Texture Importer
| Wrap Mode | Selects how the texture behaves when tiled: |
| Repeat | The texture repeats (tiles) itself. |
| Clamp | The texture's edges get stretched. |
| Filter Mode | Selects how the texture is filtered when it gets stretched by 3D transformations: |
| Point | The texture becomes blocky up close. |
| Bilinear | The texture becomes blurry up close. |
| Trilinear | Like Bilinear, but the texture also blurs between the different mip levels. |

Reflection settings in the Texture Importer
| Mapping | This determines how the texture will be mapped to a cubemap. |
| Sphere Mapped | Maps the texture to a "sphere-like" cubemap. |
| Cylindrical | Maps the texture to a cylinder; use this when you want to use reflections on objects that are like cylinders. |
| Simple Sphere | Maps the texture to a simple sphere, deforming the reflection when you rotate it. |
| Nice Sphere | Maps the texture to a sphere, deforming it when you rotate, but you can still see the texture's wrap. |
| 6 Frames Layout | The texture contains the six faces of the cube laid out either as a cross or as a sequence of images (+x -x +y -y +z -z), oriented vertically or horizontally. |
| Fixup edge seams | (Point lights only) Removes visual artifacts at the joined edges of the image; useful for glossy reflections. |
| Filter Mode | Selects how the texture is filtered when it gets stretched by 3D transformations: |
| Point | The texture becomes blocky up close. |
| Bilinear | The texture becomes blurry up close. |
| Trilinear | Like Bilinear, but the texture also blurs between the different mip levels. |
| Aniso Level | Increases texture quality when viewing the texture at a steep angle. Good for floor and ground textures. See below. |
Cookies let you use a grayscale texture to control the appearance of a light. This is useful for creating moving clouds and giving the impression of dense foliage. The Light page has all the details, but the key point is that for a texture to be usable as a cookie, its Texture Type must be set to Cookie.

Cookie settings in the Texture Importer
| Light Type | The type of light that the texture will be applied to (this can be a Spot, Point, or Directional light). For Directional lights this texture will tile, so in the texture inspector you must set the Edge Mode to Repeat. For Spot lights you should keep the edges of your cookie texture solid black in order to get the proper effect; in the texture inspector, set the Edge Mode to Clamp. |
| Mapping | This determines how the texture will be mapped to a cubemap. |
| Sphere Mapped | Maps the texture to a "sphere-like" cubemap. |
| Cylindrical | Maps the texture to a cylinder; use this when you want to use reflections on objects that are like cylinders. |
| Simple Sphere | Maps the texture to a simple sphere, deforming the reflection when you rotate it. |
| Nice Sphere | Maps the texture to a sphere, deforming it when you rotate, but you can still see the texture's wrap. |
| 6 Frames Layout | The texture contains the six faces of the cube laid out either as a cross or as a sequence of images (+x -x +y -y +z -z), oriented vertically or horizontally. |
| Fixup edge seams | (Point lights only) Removes visual artifacts at the joined edges of the image; useful for glossy reflections. |
| Alpha From Greyscale | If enabled, an alpha transparency channel will be generated from the image's existing values of light and dark. |

Lightmap settings in the Texture Importer
| Filter Mode | Selects how the texture is filtered when it gets stretched by 3D transformations: |
| Point | The texture becomes blocky up close. |
| Bilinear | The texture becomes blurry up close. |
| Trilinear | Like Bilinear, but the texture also blurs between the different mip levels. |
| Aniso Level | Increases texture quality when viewing the texture at a steep angle. Good for floor and ground textures. See below. |

The Advanced Texture Importer Settings dialog
| Non Power of 2 | If the texture has a non-power-of-two size, this defines the scaling behavior at import time (for more info, see Texture Sizes below). |
| None | The texture will be padded to the next larger power-of-two size, for use with the GUITexture component. |
| To nearest | The texture will be scaled to the nearest power-of-two size at import time. For instance, a 257x511 texture will become 256x512. Note that PVRTC formats require textures to be square (width equal to height), so the final size will be upscaled to 512x512. |
| To larger | The texture will be scaled to the next larger power-of-two size at import time. For instance, a 257x511 texture will become 512x512. |
| To smaller | The texture will be scaled to the next smaller power-of-two size at import time. For instance, a 257x511 texture will become 256x256. |
| Generate Cube Map | Generates a cubemap from the texture using one of several generation methods: |
| Spheremap | Maps the texture to a "sphere-like" cubemap. |
| Cylindrical | Maps the texture to a cylinder; use this when you want to use reflections on objects that are like cylinders. |
| SimpleSpheremap | Maps the texture to a simple sphere, deforming the reflection when you rotate it. |
| NiceSpheremap | Maps the texture to a sphere, deforming it when you rotate, but you can still see the texture's wrap. |
| FacesVertical | The texture contains the six faces of the cube arranged in a vertical strip, in the order +x -x +y -y +z -z. |
| FacesHorizontal | The texture contains the six faces of the cube arranged in a horizontal strip, in the order +x -x +y -y +z -z. |
| CrossVertical | The texture contains the six faces of the cube arranged in a vertically oriented cross. |
| CrossHorizontal | The texture contains the six faces of the cube arranged in a horizontally oriented cross. |
| Read/Write Enabled | Select this to enable access to the texture data from scripts (GetPixels, SetPixels and other Texture2D functions). Note however that a copy of the texture data will be made, doubling the amount of memory required for the texture asset, so use this only if absolutely necessary. This is only valid for uncompressed and DXT-compressed textures; other types of compressed textures cannot be read from. Disabled by default. |
| Import Type | The way the image data is interpreted: |
| Default | A standard texture. |
| Normal Map | The texture is treated as a normal map (enables other options). |
| Lightmap | The texture is treated as a lightmap (disables other options). |
| Alpha from grayscale | (Default mode only) Generates the alpha channel from the luminance of the image. |
| Create from grayscale | (Normal Map mode only) Creates the normal map from the luminance of the image. |
| Bypass sRGB sampling | (Default mode only) Uses the image colors exactly as they are, without gamma correction (useful when the texture encodes data other than colors, e.g. for GUI or image data). |
| Generate Mip Maps | Select this to enable mip-map generation. Mip maps are smaller versions of the texture that get used when the texture is very small on screen. For more info, see Mip Maps below. |
| In Linear Space | Generates mipmaps in linear color space. |
| Border Mip Maps | Select this to avoid colors seeping out to the edges of the lower mip levels. Used for light cookies (see below). |
| Mip Map Filtering | Two ways of mip map filtering are available to optimize image quality: |
| Box | The simplest way to fade out the mipmaps - the mip levels become smoother and smoother as they go down in size. |
| Kaiser | A sharpening Kaiser algorithm is run on the mip maps as they go down in size. If your textures are too blurry in the distance, try this option. |
| Fade Out Mips | Enable this to make the mipmaps fade to gray as the mip levels progress. This is used for detail maps. The left-most scroll defines the first mip level to begin fading out. The right-most scroll defines the mip level where the texture is completely grayed out. |
| Wrap Mode | Selects how the texture behaves when tiled: |
| Repeat | The texture repeats (tiles) itself. |
| Clamp | The texture's edges get stretched. |
| Filter Mode | Selects how the texture is filtered when it gets stretched by 3D transformations: |
| Point | The texture becomes blocky up close. |
| Bilinear | The texture becomes blurry up close. |
| Trilinear | Like Bilinear, but the texture also blurs between the different mip levels. |
| Aniso Level | Increases texture quality when viewing the texture at a steep angle. Good for floor and ground textures. See below. |
Per-Platform Overrides
When you are building for different platforms, you have to think about the resolution, size and quality of your textures for each target platform. You can set default options and then override them for a specific platform.

Default settings for all platforms
| Max Texture Size | The maximum imported texture size. Artists often prefer to work with huge textures; use this to scale the texture down to a suitable size. |
| Texture Format | The internal representation used for the texture. This is a tradeoff between size and quality. The examples below show the final size of an in-game texture of 256 by 256 pixels: |
| Compressed | Compressed RGB texture. This is the most common format for diffuse textures. 4 bits per pixel (32 KB for a 256x256 texture). |
| 16 bit | Low-quality truecolor. Has 16 levels of red, green, blue and alpha. |
| Truecolor | Truecolor - this is the highest quality. 256 KB for a 256x256 texture. |
If the Texture Type is set to Advanced, a different set of Texture Format values is available.
Desktop
| Texture Format | The internal representation used for the texture. This is a tradeoff between size and quality. The examples below show the final size of an in-game texture of 256 by 256 pixels: |
| RGB Compressed DXT1 | Compressed RGB texture. This is the most common format for diffuse textures. 4 bits per pixel (32 KB for a 256x256 texture). |
| RGBA Compressed DXT5 | Compressed RGBA texture. This is the main format used for diffuse and specular control textures. 1 byte per pixel (64 KB for a 256x256 texture). |
| RGB 16 bit | 65 thousand colors with no alpha. The compressed DXT formats use less memory and usually look better. 128 KB for a 256x256 texture. |
| RGB 24 bit | Truecolor but without alpha. 192 KB for a 256x256 texture. |
| Alpha 8 bit | A high-quality alpha channel but without any color. 64 KB for a 256x256 texture. |
| RGBA 16 bit | Low-quality truecolor. Has 16 levels of red, green, blue and alpha. The compressed DXT formats use less memory and usually look better. 128 KB for a 256x256 texture. |
| RGBA 32 bit | Truecolor with alpha - this is the highest quality. At 256 KB for a 256x256 texture, this format is expensive. Most of the time, DXT5 offers sufficient quality at a much smaller size. This format is mainly used for normal maps, as DXT compression there often introduces a visible quality loss. |
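The per-texture sizes quoted in these tables follow directly from each format's bits-per-pixel figure. A plain JavaScript sketch of the arithmetic (illustration only, not Unity API):

```javascript
// Size in kilobytes of a texture, given its dimensions
// and the format's bits per pixel.
function textureKB(width, height, bitsPerPixel) {
  return (width * height * bitsPerPixel) / 8 / 1024;
}

console.log(textureKB(256, 256, 4));  // 32  (DXT1, PVRTC 4 bpp)
console.log(textureKB(256, 256, 8));  // 64  (DXT5, Alpha 8 bit)
console.log(textureKB(256, 256, 16)); // 128 (RGB/RGBA 16 bit)
console.log(textureKB(256, 256, 24)); // 192 (RGB 24 bit)
console.log(textureKB(256, 256, 32)); // 256 (RGBA 32 bit)
```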
iOS
| Texture Format | The internal representation used for the texture. This is a tradeoff between size and quality. The examples below show the final size of an in-game texture of 256 by 256 pixels: |
| RGB Compressed PVRTC 4 bits | Compressed RGB texture. This is the most common format for diffuse textures. 4 bits per pixel (32 KB for a 256x256 texture). |
| RGBA Compressed PVRTC 4 bits | Compressed RGBA texture. This is the main format used for diffuse and specular control textures with transparency. 4 bits per pixel (32 KB for a 256x256 texture). |
| RGB Compressed PVRTC 2 bits | Compressed RGB texture. A lower-quality format suitable for diffuse textures. 2 bits per pixel (16 KB for a 256x256 texture). |
| RGBA Compressed PVRTC 2 bits | Compressed RGBA texture. A lower-quality format suitable for diffuse and specular control textures. 2 bits per pixel (16 KB for a 256x256 texture). |
| RGB Compressed DXT1 | Compressed RGB texture. This format is not supported on iOS, but is kept for backwards compatibility with desktop projects. |
| RGBA Compressed DXT5 | Compressed RGBA texture. This format is not supported on iOS, but is kept for backwards compatibility with desktop projects. |
| RGB 16 bit | 65 thousand colors with no alpha. Uses more memory than the PVRTC formats, but could be more suitable for UI or crisp textures without gradients. 128 KB for a 256x256 texture. |
| RGB 24 bit | Truecolor but without alpha. 192 KB for a 256x256 texture. |
| Alpha 8 bit | A high-quality alpha channel but without any color. 64 KB for a 256x256 texture. |
| RGBA 16 bit | Low-quality truecolor. Has 16 levels of red, green, blue and alpha. Uses more memory than the PVRTC formats, but can be handy if you need an exact alpha channel. 128 KB for a 256x256 texture. |
| RGBA 32 bit | Truecolor with alpha - this is the highest quality. At 256 KB for a 256x256 texture, this format is expensive. Most of the time, PVRTC offers sufficient quality at a much smaller size. |
| Compression quality | Choose Fast for the fastest performance, Best for the best image quality, and Normal for a balance between the two. |

Android
| Texture Format | The internal representation used for the texture. This is a tradeoff between size and quality. The examples below show the final size of an in-game texture of 256 by 256 pixels: |
| RGB Compressed DXT1 | Compressed RGB texture. Supported by Nvidia Tegra. 4 bits per pixel (32 KB for a 256x256 texture). |
| RGBA Compressed DXT5 | Compressed RGBA texture. Supported by Nvidia Tegra. 8 bits per pixel (64 KB for a 256x256 texture). |
| RGB Compressed ETC 4 bits | Compressed RGB texture. This is the default texture format for Android projects. ETC1 is part of OpenGL ES 2.0 and is supported by all OpenGL ES 2.0 GPUs. It does not support alpha. 4 bits per pixel (32 KB for a 256x256 texture). |
| RGB Compressed PVRTC 2 bits | Compressed RGB texture. Supported by Imagination PowerVR GPUs. 2 bits per pixel (16 KB for a 256x256 texture). |
| RGBA Compressed PVRTC 2 bits | Compressed RGBA texture. Supported by Imagination PowerVR GPUs. 2 bits per pixel (16 KB for a 256x256 texture). |
| RGB Compressed PVRTC 4 bits | Compressed RGB texture. Supported by Imagination PowerVR GPUs. 4 bits per pixel (32 KB for a 256x256 texture). |
| RGBA Compressed PVRTC 4 bits | Compressed RGBA texture. Supported by Imagination PowerVR GPUs. 4 bits per pixel (32 KB for a 256x256 texture). |
| RGB Compressed ATC 4 bits | Compressed RGB texture. Supported by Qualcomm Snapdragon. 4 bits per pixel (32 KB for a 256x256 texture). |
| RGBA Compressed ATC 8 bits | Compressed RGBA texture. Supported by Qualcomm Snapdragon. 8 bits per pixel (64 KB for a 256x256 texture). |
| RGB 16 bit | 65 thousand colors with no alpha. Uses more memory than the compressed formats, but could be more suitable for UI or crisp textures without gradients. 128 KB for a 256x256 texture. |
| RGB 24 bit | Truecolor but without alpha. 192 KB for a 256x256 texture. |
| Alpha 8 bit | A high-quality alpha channel but without any color. 64 KB for a 256x256 texture. |
| RGBA 16 bit | Low-quality truecolor. The default compression for textures with an alpha channel. 128 KB for a 256x256 texture. |
| RGBA 32 bit | Truecolor with alpha - this is the highest quality for textures with alpha. 256 KB for a 256x256 texture. |
| Compression quality | Choose Fast for the fastest performance, Best for the best image quality, and Normal for a balance between the two. |
Unless you are targeting specific hardware such as Tegra, ETC1 compression is the recommended choice. If needed, you can store an external alpha channel and still benefit from the lower texture footprint. If you absolutely want to store an alpha channel in a texture, RGBA 16-bit is the format supported by all hardware vendors.
If your application uses an unsupported texture compression, the textures will be uncompressed to RGBA 32 and stored in memory along with the compressed ones. In this case you lose time decompressing the textures and waste memory by storing them twice. This can also have a drastic negative impact on rendering performance.
Flash
| Format | Image format |
| RGB JPG Compressed | RGB image data compressed in JPG format |
| RGBA JPG Compressed | RGBA image data (ie, with alpha) compressed in JPG format |
| RGB 24-bit | Uncompressed RGB image data, 8 bits per channel |
| RGBA 32-bit | Uncompressed RGBA image data, 8 bits per channel |
Details
Supported Formats
Unity can read the following file formats: PSD, TIFF, JPG, TGA, PNG, GIF, BMP, IFF, PICT. It should be noted that Unity can import multi-layer PSD and TIFF files just fine. They are flattened automatically on import, but the layers are maintained in the assets themselves, so you don't lose any of your work when using these file types natively. This is important, as it allows you to keep a single copy of your textures that you can use from Photoshop, through your 3D modelling app, and into Unity.
Texture Sizes
Ideally, texture sizes should be powers of two on each side. These sizes are: 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024 or 2048 pixels. The textures do not have to be square; that is, the width can be different from the height.
It is possible to use other (non power of two) texture sizes with Unity. Non power of two texture sizes work best when used for GUI Textures; if used for anything else, they are converted to an uncompressed RGBA 32 bit format. That means they take up more video memory than PVRTC (iOS) / DXT (desktop) compressed textures, and are slower to load and render. In general, use non power of two sizes only for GUI purposes.
Non power of two texture assets can be scaled up at import time using the Non Power of 2 option in the Advanced texture type import settings. Unity will scale the texture contents as requested, and in the game the texture will behave just like any other, so it can still be compressed and load very fast.
One potential issue with non power of two textures is that Unity converts them internally to a power-of-two size, and this stretching can introduce slight visual artifacts.
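The power-of-two rounding used by the Non Power of 2 import option ("To nearest", "To larger", "To smaller") can be sketched in a few lines of plain JavaScript (illustration only, not Unity API), reproducing the 257x511 examples from the Advanced settings table:

```javascript
// Smallest power of two greater than or equal to n (n >= 1).
function nextPow2(n) {
  let p = 1;
  while (p < n) p *= 2;
  return p;
}

// Largest power of two less than or equal to n (n >= 1).
function prevPow2(n) {
  let p = 1;
  while (p * 2 <= n) p *= 2;
  return p;
}

// Nearest power of two (ties round down).
function nearestPow2(n) {
  const lo = prevPow2(n), hi = nextPow2(n);
  return (n - lo <= hi - n) ? lo : hi;
}

// The 257x511 texture from the examples in the table:
console.log(nearestPow2(257), nearestPow2(511)); // 256 512 ("To nearest")
console.log(nextPow2(257),    nextPow2(511));    // 512 512 ("To larger")
console.log(prevPow2(257),    prevPow2(511));    // 256 256 ("To smaller")
```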
UV Mapping
When mapping a 2D texture onto a 3D model, some sort of wrapping is done. This is called UV mapping and is done in your 3D modelling app. Inside Unity, you can scale and move the texture using Materials. Scaling normal and detail maps is especially useful.
Mip Maps
Mip maps are lists of progressively smaller versions of an image, used to optimise performance in real-time 3D engines. Objects that are far away from the camera use the smaller texture versions. Using mip maps uses 33% more memory, but not using them can cause a huge performance loss. You should always use mip maps for in-game textures; the only exceptions are textures that will never be minified (e.g. GUI textures).
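The 33% figure comes from the geometric series formed by the mip chain: each level has a quarter of the pixels of the one above, so the whole chain approaches 4/3 of the base texture's memory. A quick plain JavaScript check (illustration only):

```javascript
// Total pixels in a full mip chain for a square power-of-two texture,
// expressed relative to the base level alone.
function mipChainOverhead(size) {
  let total = 0;
  for (let s = size; s >= 1; s = Math.floor(s / 2)) {
    total += s * s; // each mip level halves the width and height
  }
  return total / (size * size); // approaches 4/3 as levels are added
}

console.log(mipChainOverhead(256)); // ~1.333, i.e. about 33% extra memory
```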
Normal Maps
Normal maps are used by normal map shaders to make low-polygon models look as if they contain more detail. Unity uses normal maps encoded as RGB images. You also have the option to generate a normal map from a grayscale height map image.
Detail Maps
If you want to make a terrain, you normally use your main texture to show areas of grass, rocks and sand. If your terrain is of a decent size, it will end up very blurry. Detail textures hide this fact by fading in small details as your main texture gets up close.
When drawing detail textures, a neutral gray is invisible, white makes the main texture twice as bright, and black makes the main texture completely black.
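That blending rule amounts to a clamped multiply-by-two per channel. A plain JavaScript sketch of the math (an illustration of the stated behavior, with color channels as 0..1 values):

```javascript
// Combine a main texture channel with a detail texture channel.
// detail = 0.5 (neutral gray) leaves the main value unchanged,
// detail = 1.0 (white) doubles it, detail = 0.0 (black) zeroes it.
function detailBlend(main, detail) {
  return Math.min(1, main * detail * 2);
}

console.log(detailBlend(0.4, 0.5)); // 0.4 - neutral gray is invisible
console.log(detailBlend(0.4, 1.0)); // 0.8 - white doubles the brightness
console.log(detailBlend(0.4, 0.0)); // 0   - black blacks it out
```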
Reflections (Cube Maps)
If you want to use textures for reflection maps (e.g. with the Reflective built-in shaders), you need to use Cubemap Textures.
Anisotropic Filtering
Anisotropic filtering increases texture quality when a texture is viewed from a grazing angle, at some expense of rendering cost (the cost is entirely on the graphics card). Increasing the anisotropy level is usually a good idea for ground and floor textures. In Quality Settings, anisotropic filtering can be forced on for all textures or disabled completely.
No anisotropy (left) / Maximum anisotropy (right) used on the ground texture
Procedural Materials
Unity incorporates a new asset type known as Procedural Materials. These are essentially the same as standard Materials except that the textures they use can be generated at runtime rather than being predefined and stored.
The script code that generates a texture procedurally will typically take up much less space in storage and transmission than a bitmap image and so Procedural Materials can help reduce download times. Additionally, the generation script can be equipped with parameters that can be changed in order to vary the visual properties of the material at runtime. These properties can be anything from color variations to the size of bricks in a wall. Not only does this mean that many variations can be generated from a single Procedural Material but also that the material can be animated on a frame-by-frame basis. Many interesting visual effects are possible - imagine a character gradually turning to stone or acid damaging a surface as it touches.
Unity's Procedural Material system is based around an industry standard product called Substance, developed by Allegorithmic.
Supported Platforms
In Unity, Procedural Materials are fully supported for standalone and webplayer build targets only (Windows and Mac OS X). For all other platforms, Unity will pre-render or bake them into ordinary Materials during the build. Although this clearly negates the runtime benefits of procedural generation, it is still useful to be able to create variations on a basic material in the editor.
Adding Procedural Materials to a Project
A Procedural Material is supplied as a Substance Archive file (SBSAR), which you can import like any other asset (drag and drop it directly onto the Assets folder, or import it from the menu). A Substance Archive asset contains one or more Procedural Materials, along with all the scripts and images they require. Uncompiled SBS files are not supported.
Although they are implemented differently, Unity handles a Procedural Material just like any other Material. To assign a Procedural Material to a mesh, for example, you just drag and drop it onto the mesh exactly as you would with any other Material.
Procedural Properties
Each Procedural Material is a custom script which generates a particular type of material. These scripts are similar to Unity scripts in that they can have variables exposed for assignment in the inspector. For example, a "Brick Wall" Procedural Material could expose properties that let you set the number of courses of bricks, the colors of the bricks and the color of the mortar. This potentially offers infinite material variations from a single asset. These properties can also be set from a script at runtime in much the same way as the public variables of a MonoBehaviour script.
Procedural Materials can also incorporate complex texture animation. For example, you could animate the hands of a clock or have cockroaches running across a floor.

Creating Procedural Materials From Scratch
Procedural Materials can work with any combination of procedurally generated textures and stored bitmaps. Additionally, included bitmap images can be filtered and modified before use. Unlike a standard Material, a Procedural Material can use vector images in the form of SVG files which allows for resolution-independent textures.
The design tools available for creating Procedural Materials from scratch use visual, node-based editing similar to the kind found in artistic tools. This makes creation accessible to artists who may have little or no coding experience. As an example, here is a screenshot from Allegorithmic's Substance Designer which shows a "brick wall" Procedural Material under construction:

Obtaining Procedural Materials
Since Unity's Procedural Materials are based on the industry standard Substance product, Procedural Material assets are readily available from internet sources, including Unity's own Asset Store. Allegorithmic's Substance Designer can be used to create Procedural Materials, but there are other applications (3D modelling apps, for example) that incorporate the Substance technology and work just as well with Unity.
Performance and Optimization
Procedural Materials inherently tend to use less storage than bitmap images. However, the trade-off is that they are based around scripts and running those scripts to generate materials requires some CPU and GPU resources. The more complex your Procedural Materials are, the greater their runtime overhead.
Procedural Materials support a form of caching whereby the material is only updated if its parameters have changed since it was last generated. Further to this, some materials may have many properties that could theoretically be changed and yet only a few will ever need to change at runtime. In such cases, you can inform Unity about the variables that will not change to help it cache as much data as possible from the previous generation of the material. This will often improve performance significantly.
Procedural Materials can also be used purely as a convenience in the editor (ie, you can generate a standard Material by setting the parameters of a Procedural Material and then "baking" it). This will remove the runtime overhead of material generation but naturally, the baked materials can't be changed or animated during gameplay.
Using the Substance Player to Analyze Performance
Since the complexity of a Procedural Material can affect runtime performance, Allegorithmic incorporates profiling features in its Substance Player tool. This tool is available to download for free from Allegorithmic's website.
Substance Player uses the same optimized rendering engine as the one integrated into Unity, so its rendering measurement is more representative of performance in Unity than that of Substance Designer.
Page last updated: 2012-10-12
Video Files
Note: this is a Unity Pro / Advanced license feature only.
Desktop
Movie Textures are animated Textures that are created from a video file.
By placing a video file in your project's Assets folder, you can import the video to be used exactly as you would use a regular Texture.
Video files are imported via Apple QuickTime. Supported file types are those supported by your installed version of QuickTime (usually .mov, .mpg, .mpeg, .mp4, .avi and .asf). On Windows, QuickTime must be installed for movie importing to work (it can be downloaded here).
Properties
The Movie Texture Inspector is very similar to the regular Texture Inspector.

Movie Textures created from a video file in Unity
| Aniso Level | Increases texture quality when viewing the texture at a steep angle. Good for floor and ground textures. |
| Filter Mode | Selects how the texture is filtered when it gets stretched by 3D transformations. |
| Loop | If enabled, the movie will loop when playback finishes. |
| Quality | Compression of the Ogg Theora video file. A higher value means higher quality but a larger file size. |
Details
When a video file is added to your Project, it will automatically be imported and converted to Ogg Theora format. Once your Movie Texture has been imported, you can attach it to any GameObject or Material, just like a regular Texture.
Playing the Movie
Your Movie Texture will not play automatically when the game begins running. You must use a short script to tell it to play.
// This line of code will make the Movie Texture begin playing
renderer.material.mainTexture.Play();
Attach the following script to toggle movie playback when the space bar is pressed:
function Update () {
    if (Input.GetButtonDown ("Jump")) {
        if (renderer.material.mainTexture.isPlaying) {
            renderer.material.mainTexture.Pause();
        }
        else {
            renderer.material.mainTexture.Play();
        }
    }
}
For more information about playing Movie Textures, see the Movie Texture Script Reference page.
Movie Audio
When a Movie Texture is imported, the audio track accompanying the visuals is imported as well. This audio appears as an AudioClip child of the Movie Texture.

The video's audio track appears as a child of the Movie Texture in the Project View
To play this audio, the Audio Clip must be attached to a GameObject, like any other Audio Clip. Drag the Audio Clip from the Project View onto any GameObject in the Scene or Hierarchy View.
Usually, this will be the same GameObject that is showing the Movie. Then use audio.Play() to make the audio track play along with the video.

iOS
Movie Textures are not supported on iOS. Instead, full-screen streaming playback is provided via Handheld.PlayFullScreenMovie.
You need to keep your videos inside the StreamingAssets folder located in your Project directory.
Unity iOS supports any movie file type that plays correctly on an iOS device, meaning files with the extensions .mov, .mp4, .mpv and .3gp using one of the following compression standards:
- H.264 Baseline Profile Level 3.0 video
- MPEG-4 Part 2 video
For more information about supported compression standards, consult the MPMoviePlayerController class reference in the iPhone SDK documentation.
As soon as you call iPhoneUtils.PlayMovie or iPhoneUtils.PlayMovieURL, the screen will fade from your current content to the designated background color. It might take some time before the movie is ready to play; in the meantime, the player will continue displaying the background color and may also display a progress indicator to let the user know the movie is loading. When playback finishes, the screen will fade back to your content.
The video player does not respect switching to mute while playing videos
As written above, video files are played using Apple's embedded player (as of SDK 3.2 and iPhone OS 3.1.2 and earlier). This contains a bug that prevents Unity from switching to mute while playing videos.
The video player does not respect the device's orientation
The Apple video player and iPhone SDK do not provide a way to adjust the orientation of the video. A common approach is to manually create two copies of each movie, one in landscape and one in portrait orientation. The orientation of the device can then be determined before playback and the right version of the movie chosen.

Android
Movie Textures are not supported on Android. Instead, full-screen streaming playback is provided via Handheld.PlayFullScreenMovie.
You need to keep your videos inside the StreamingAssets folder located in your Project directory.
Unity Android supports any movie file type supported by Android, meaning files with the extensions .mp4 and .3gp using one of the following compression standards:
- H.263
- H.264 AVC
- MPEG-4 SP
However, device vendors are keen on expanding this list, so some Android devices are able to play formats other than those listed, such as HD videos.
For more information about the supported compression standards, consult the Android SDK Core Media Formats documentation.
As soon as you call iPhoneUtils.PlayMovie or iPhoneUtils.PlayMovieURL, the screen will fade from your current content to the designated background color. It might take some time before the movie is ready to play; in the meantime, the player will continue displaying the background color and may also display a progress indicator to let the user know the movie is loading. When playback finishes, the screen will fade back to your content.
Audio Files
As with Meshes or Textures, the workflow for Audio File assets is designed to be smooth and trouble free. Unity can import almost every common file format but there are a few details that are useful to be aware of when working with Audio Files.
Audio in Unity is either Native or Compressed. Unity supports most common formats (see the list below) and will import an audio file when it is added to the project. The default mode is Native, where the audio data from the original file is imported unchanged. However, Unity can also compress the audio data on import, simply by enabling the Compressed option in the importer. (iOS projects can make use of the hardware decoder - see the iOS documentation for further details). The differences between Native and Compressed modes are as follows:
- Native: Use Native (WAV, AIFF) audio for short sound effects. The audio data will be larger but sounds won't need to be decoded at runtime.
- Compressed: The audio data will be small but will need to be decompressed at runtime, which entails a processing overhead. Depending on the target, Unity will encode the audio to either Ogg Vorbis(Mac/PC/Consoles) or MP3 (Mobile platforms). For the best sound quality, supply the audio in an uncompressed format such as WAV or AIFF (containing PCM data) and let Unity do the encoding. If you are targeting Mac and PC platforms only (including both standalones and webplayers) then importing an Ogg Vorbis file will not degrade the quality. However, on mobile platforms, Ogg Vorbis and MP3 files will be re-encoded to MP3 on import, which will introduce a slight quality degradation.
Any Audio File imported into Unity is available from scripts as an Audio Clip instance, which is effectively just a container for the audio data. The clips must be used in conjunction with Audio Sources and an Audio Listener in order to actually generate sound. When you attach your clip to an object in the game, it adds an Audio Source component to the object, which has Volume, Pitch and numerous other properties. While a Source is playing, an Audio Listener can "hear" all sources within range, and the combination of those sources gives the sound that will actually be heard through the speakers. There can be only one Audio Listener in your scene, and this is usually attached to the Main Camera.
Supported Formats
| Format | Compressed as (Mac/PC) | Compressed as (Mobile) |
|---|---|---|
| MPEG(1/2/3) | Ogg Vorbis | MP3 |
| Ogg Vorbis | Ogg Vorbis | MP3 |
| WAV | Ogg Vorbis | MP3 |
| AIFF | Ogg Vorbis | MP3 |
| MOD | - | - |
| IT | - | - |
| S3M | - | - |
| XM | - | - |
See the Sound chapter in the Creating Gameplay section of this manual for more information on using sound in Unity.
Audio Clips
Audio Clips contain the audio data used by Audio Sources. Unity supports mono, stereo and multichannel audio assets (up to eight channels). The audio file formats that Unity can import are .aif, .wav, .mp3 and .ogg, plus the tracker module formats .xm, .mod, .it and .s3m. Tracker module assets behave the same way as any other audio asset in Unity, except that no waveform preview can be rendered in the asset import inspector.

The Audio Clip Inspector
Properties
| Audio Format | The specific format that will be used for the sound at runtime. |
| Native | Larger file size, higher quality. Best for very short sound effects. |
| Compressed | Smaller file size, lower/variable quality. Best for medium-length sound effects and music. |
| 3D Sound | If enabled, the sound will play back in 3D space. Both mono and stereo sounds can be played in 3D. |
| Force to mono | If enabled, the audio clip will be down-mixed to a single-channel sound. |
| Load Type | The method Unity uses to load audio assets at runtime. |
| Decompress on load | Decompress sounds as soon as they are loaded. Use this option for smaller compressed sounds to avoid the performance overhead of decompressing on the fly. Be aware that decompressing sounds on load can use about ten times more memory than keeping them compressed, so don't use it for large files. |
| Compressed in memory | Keep sounds compressed in memory and decompress them while playing. This option has a slight performance overhead (especially for Ogg Vorbis compressed files), so only use it for bigger files. Note that, due to technical limitations, this option will silently switch to Stream From Disc (see below) for Ogg Vorbis assets on platforms that use FMOD audio. |
| Stream from disc | Stream audio data directly from disc. This uses only a fraction of the original sound's memory size. Use this for music or very long tracks. Depending on the hardware, keep this to one or two simultaneous streams in general. |
| Compression | Amount of compression to be applied to a Compressed clip. Statistics about the file size can be seen beneath the slider. Drag the slider to a setting that leaves the playback "good enough" while keeping the file small enough for your distribution requirements. |
| Hardware Decoding | (iOS only) Available for compressed audio on iOS devices. Uses Apple's hardware decoder for less CPU-intensive decompression. See the platform-specific details for more info. |
| Gapless looping | (Android/iOS only) Use this when compressing a perfectly looping audio source file (in a non-compressed PCM format) to preserve the loop. Standard MPEG encoders introduce a short silence around the loop point, which plays back as a short "click" or "pop". Unity handles this gracefully for you. |
Importing Audio Assets
Unity supports both Compressed and Native audio. Any type of file (except MP3/Ogg Vorbis) will initially be imported as Native. Compressed audio files must be decompressed by the CPU while the game is running, but have a smaller file size. If Stream is checked, the audio is decompressed on the fly; otherwise it is decompressed entirely as soon as it loads. Native PCM formats (WAV, AIFF) have the benefit of giving higher fidelity without increasing the CPU load, but the files produced are much larger. Module files (.mod, .it, .s3m, .xm) can deliver very high quality with an extremely low footprint.
As a general rule of thumb, Compressed audio (or modules) is best for long files like background music or dialog, while Native audio is better for short sound effects. Start off with high compression and gradually reduce the amount with the compression slider, fine-tuning around the point where differences in sound quality become noticeable.
3D オーディオの使用
オーディオ クリップに「3D 音声」と表示されている場合、このクリップは、ゲームの世界の 3D スペースでの位置をシミュレートするために再生されます。 3D 音声は、音量を減らし、スピーカー間でパンすることで、音声の距離や位置をエミュレートします。 モノとマルチ チャンネルの音声の両方を 3D に配置できます。 マルチ チャンネル オーディオの場合、Audio Source の「Spread」オプションを使用して、スピーカー スペースで個々のチャンネルを拡散および分割します。 Unity は、3D スペースでのオーディオ の動作を制御および微調整するための各種オプションを提供しています。 Audio Source を参照してください。
Platform-specific details

iOS
On mobile platforms, compressed audio is encoded as MP3 so that it takes less CPU time to decompress.
For performance reasons, audio clips can be played back using the Apple hardware codec. To enable this, check the "Hardware Decoding" checkbox in the Audio Importer. Note that only one hardware audio stream can be decoded at a time, including the background iPod audio.
If the hardware decoder is not available, decompression falls back to a software decoder (on iPhone 3GS and later, Apple's software decoder is used in preference to Unity's own (FMOD) decoder).

Android
On mobile platforms, compressed audio is encoded as MP3 so that it takes less CPU time to decompress.
Tracker Modules
Tracker Modules are essentially just packages of audio samples that have been modeled, arranged and sequenced programmatically. The concept was introduced in the 1980s (mainly in conjunction with the Amiga computer) and has been popular since the early days of game development and demo culture.
Tracker Module files are similar to MIDI files in many ways. The tracks are scores that contain information about when to play the instruments, and at what pitch and volume and from this, the melody and rhythm of the original tune can be recreated. However, MIDI has a disadvantage in that the sounds are dependent on the sound bank available in the audio hardware, so MIDI music can sound different on different computers. In contrast, tracker modules include high quality PCM samples that ensure a similar experience regardless of the audio hardware in use.
Supported formats
Unity supports the four most common module file formats, namely Impulse Tracker (.it), Scream Tracker (.s3m), Extended Module File Format (.xm), and the original Module File Format (.mod).
Benefits of Using Tracker Modules
Tracker module files differ from mainstream PCM formats (.aif, .wav, .mp3, and .ogg) in that they can be very small without a corresponding loss of sound quality. A single sound sample can be modified in pitch and volume (and can have other effects applied), so it essentially acts as an "instrument" which can play a tune without the overhead of recording the whole tune as a sample. As a result, tracker modules lend themselves to games, where music is required but where a large file download would be a problem.
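The "one sample as an instrument" idea can be sketched outside Unity. The snippet below is plain JavaScript, not a Unity or tracker API; the sample data and the `resample` helper are invented for illustration. It shows how playing a single stored PCM sample back at a different rate changes its pitch, which is why one recording can produce many notes without storing each note separately:

```javascript
// Illustrative sketch: resampling one PCM sample at different playback
// rates shifts its pitch, so the sample acts like an "instrument".
function resample(sample, rate) {
  // rate 2.0 -> one octave up (half as many output frames),
  // rate 0.5 -> one octave down (twice as many output frames).
  const out = [];
  for (let pos = 0; pos < sample.length; pos += rate) {
    out.push(sample[Math.floor(pos)]); // nearest-neighbour resampling
  }
  return out;
}

const sample = [0, 0.5, 1, 0.5, 0, -0.5, -1, -0.5]; // one cycle of a wave
const octaveUp = resample(sample, 2.0);   // shorter output -> higher pitch
const octaveDown = resample(sample, 0.5); // longer output  -> lower pitch
```

Real trackers use higher-quality interpolation, but the storage benefit is the same: the module carries one sample plus a score, not a full recording of the tune.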
Third Party Tools and Further References
Currently, the most popular tools to create and edit Tracker Modules are MilkyTracker for OS X and OpenMPT for Windows. For more information and discussion, please see the blog post ".mod in Unity" from June 2010.
Page last updated: 2011-11-15
Scripting
This brief introduction explains how to create and use scripts in a project. For detailed information about the Scripting API, please view the Scripting Reference. For detailed information about creating game play through scripting, please view the Creating Gameplay page of this manual.
Behaviour scripts in Unity can be written in JavaScript, C#, or Boo. It is possible to use any combination of the three languages in a single project, although there are certain restrictions in cases where one script incorporates classes defined in another script.
Creating New Scripts
Unlike other assets such as Meshes or Textures, script files can be created from within Unity. To create a new script, open the Create menu item for your chosen language (JavaScript, C#, or Boo) from the main menu. This will create a new script called NewBehaviourScript and place it in the selected folder in the Project View. If no folder is selected in the Project View, the script will be created at the root level.

You can edit the script by double-clicking it in the Project View. This will launch your default text editor as specified in Unity's preferences, where you can also change the default script editor.
These are the contents of a new, empty behaviour script:
function Update () {
}
A new, empty script does not do a lot on its own, so let's add some functionality. Change the script to read the following:
function Update () {
print("Hello World");
}
When executed, this code will print "Hello World" to the console. But there is nothing that causes the code to be executed yet. We have to attach the script to an active GameObject in the Scene before it will be executed.
Attaching scripts to objects
Save the above script and create a new Cube object in the Scene from the GameObject creation menu. This will create a new GameObject called "Cube" in the current Scene.

Now drag the script from the Project View onto the Cube (in the Scene or Hierarchy View, it doesn't matter). You can also select the Cube and attach the script from the Component menu. Either of these methods will attach the script to the Cube; every script you create will appear in that menu.

If you select the Cube and look at the Inspector, you will see that the script is now visible. This means it has been attached.

Press Play to test your creation. You should see the text "Hello World" appear beside the Play/Pause/Step buttons. Exit play mode when you see it.

Manipulating the GameObject
A print() statement can be very handy when debugging your script, but it does not manipulate the GameObject it is attached to. Let's change the script to add some functionality:
function Update () {
transform.Rotate(0, 5*Time.deltaTime, 0);
}
If you're new to scripting, it's okay if this looks confusing. These are the important concepts to understand:
- function Update () {} is a container for code that Unity executes multiple times per second (once per frame).
- transform is a reference to the GameObject's Transform Component.
- Rotate() is a function contained in the Transform Component.
- The numbers in-between the commas represent the degrees of rotation around each axis of 3D space: X, Y, and Z.
- Time.deltaTime is a member of the Time class that evens out movement over one second, so the cube will rotate at the same speed no matter how many frames per second your machine is rendering. Therefore, 5 * Time.deltaTime means 5 degrees per second.
With all this in mind, we can read this code as "every frame, rotate this GameObject's Transform component a small amount so that it will equal five degrees around the Y axis each second."
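The frame-rate independence that Time.deltaTime provides can be checked with plain arithmetic. This sketch is ordinary JavaScript run outside Unity; `rotatePerSecond` is an invented helper, not Unity API. It accumulates `speed * deltaTime` over one simulated second at different frame rates:

```javascript
// Illustrative sketch: rotating by speed * deltaTime each frame gives the
// same total rotation per second regardless of the frame rate.
function rotatePerSecond(speed, framesPerSecond) {
  const deltaTime = 1 / framesPerSecond; // seconds elapsed per frame
  let angle = 0;
  for (let frame = 0; frame < framesPerSecond; frame++) {
    angle += speed * deltaTime; // what transform.Rotate would receive each frame
  }
  return angle;
}

console.log(rotatePerSecond(5, 30)); // ~5 degrees after one second at 30 fps
console.log(rotatePerSecond(5, 60)); // ~5 degrees after one second at 60 fps
```

Without the `deltaTime` factor, the 60 fps machine would rotate the cube twice as fast as the 30 fps machine.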
You can access lots of different Components in the same way as we accessed transform. Components must first be added to the GameObject using the Component menu. All the Components you can access directly are listed under Variables on the GameObject Scripting Reference page.
For more information about the relationship between GameObjects, Scripts, and Components, please jump ahead to the GameObjects page or Using Components page of this manual.
The Power of Variables
Our script so far will always rotate the Cube 5 degrees each second. We might want it to rotate a different number of degrees per second. We could change the number and save, but then we have to wait for the script to be recompiled and we have to enter Play mode before we see the results. There is a much faster way to do it. We can experiment with the speed of rotation in real-time during Play mode, and it's easy to do.
Instead of typing 5 into the Rotate() function, we will declare a speed variable and use that in the function. Change the script to the following code and save it:
var speed = 5.0;
function Update () {
transform.Rotate(0, speed*Time.deltaTime, 0);
}
Now, select the Cube and look at the Inspector. Notice how our speed variable appears.

This variable can now be modified directly in the Inspector: select the value and type in a new one. You can also right- or option-click on the value and drag the mouse up or down. You can change the variable at any time, even while the game is running.
Hit Play and try modifying the speed value. The Cube's rotation speed will change instantly. When you exit Play mode, you'll see that your changes are reverted back to their value before entering Play mode. This way you can play, adjust, and experiment to find the best value, then apply that value permanently.
The technique of changing a variable's value in the Inspector makes it easy to reuse one script on many objects, each with a different variable value. If you attach the script to multiple Cubes, and change the speed of each cube, they will all rotate at different speeds even though they use the same script.
Accessing Other Components
When writing a script Component, you can access other components on the GameObject from within that script.
Using the GameObject members
You can directly access any member of the GameObject class. You can see a list of all the GameObject class members here. If any of the indicated classes are attached to the GameObject as a Component, you can access that Component directly through the script by simply typing the member name. For example, typing transform is equivalent to gameObject.transform. The gameObject is assumed by the compiler, unless you specifically reference a different GameObject.
Typing this will be accessing the script Component that you are writing. Typing this.gameObject is referring to the GameObject that the script is attached to. You can access the same GameObject by simply typing gameObject. Logically, typing this.transform is the same as typing transform. If you want to access a Component that is not included as a GameObject member, you have to use gameObject.GetComponent() which is explained on the next page.
There are many Components that can be directly accessed in any script. For example, if you want to access the Translate function of the Transform component, you can just write transform.Translate() or gameObject.transform.Translate(). This works because all scripts are attached to a GameObject. So when you write transform you are implicitly accessing the Transform Component of the GameObject that is being scripted. To be explicit, you write gameObject.transform. There is no advantage in one method over the other, it's all a matter of preference for the scripter.
To see a list of all the Components you can access implicitly, take a look at the GameObject page in the Scripting Reference.
Using GetComponent()
There are many Components which are not referenced directly as members of the GameObject class. So you cannot access them implicitly, you have to access them explicitly. You do this by calling the GetComponent("component name") and storing a reference to the result. This is most common when you want to make a reference to another script attached to the GameObject.
Pretend you are writing Script B and you want to make a reference to Script A, which is attached to the same GameObject. You would have to use GetComponent() to make this reference. In Script B, you would simply write:
scriptA = GetComponent("ScriptA");
For more help with using GetComponent(), take a look at the GetComponent() Script Reference page.
Accessing variables in other script Components
All scripts attached to your GameObjects are Components. Therefore to get access to a public variable (and methods) in a script you make use of the GetComponent method. For example:
function Start () {
// Print the position of the transform component, for the gameObject this script is attached to
Debug.Log(gameObject.GetComponent.<Transform>().position);
}
In the previous example, the GetComponent.<T>() function is used to access the position property of the Transform component. The same technique can be used to access a variable in a custom script Component:
(MyClass.js)
public var speed : float = 3.14159;
(MyOtherClass.js)
function Start () {
// Print the speed variable from the MyClass script Component attached to the gameObject
Debug.Log(gameObject.GetComponent.<MyClass>().speed);
}
Accessing a variable defined in C# from Javascript
To access variables defined in C# scripts the compiled Assembly containing the C# code must exist when the Javascript code is compiled. Unity performs the compilation in different stages as described in the Script Compilation section in the Scripting Reference. If you want to create a Javascript that uses classes or variables from a C# script just place the C# script in the "Standard Assets", "Pro Standard Assets" or "Plugins" folder and the Javascript outside of these folders. The code inside the "Standard Assets", "Pro Standard Assets" or "Plugins" is compiled first and the code outside is compiled in a later step making the Types defined in the compilation step (your C# script) available to later compilation steps (your Javascript script).
In general the code inside the "Standard Assets", "Pro Standard Assets" or "Plugins" folders, regardless of the language (C#, Javascript or Boo), will be compiled first and available to scripts in subsequent compilation steps.
Optimizing variable access
In some circumstances you may be using GetComponent multiple times in your code, or multiple times per frame. Every call to GetComponent does a few extra steps internally to get the reference to the component you require. A more efficient approach is to store the reference to the component for example in your Start() function. As you will be storing the reference and not retrieving directly it is always good practice to check for null references:
(MyClass.js)
public var speed : float = 3.14159;
(MyOtherClass.js)
private var myClass : MyClass;
function Start () {
// Get a reference to the MyClass script Component attached to the gameObject
myClass = gameObject.GetComponent.<MyClass>();
}
function Update () {
// Verify that the reference is still valid and print the speed variable
if(myClass != null)
Debug.Log (myClass.speed);
}
Static Variables
It is also possible to declare variables in your classes as static. There will exist one and only one instance of a static variable for a specific class and it can be modified without the need of an instance of a class object:
(MyClass.js)
static public var speed : float = 3.14159;
(MyOtherClass.js)
function Start () {
Debug.Log (MyClass.speed);
}
It is recommended to not use static variables for object references to make sure unused objects are removed from memory.
Where to go from here
This was just a short introduction on how to use scripts inside the Editor. For more examples, check out the Unity tutorials, available for free on our Asset Store. You should also read through the Scripting Overview in the Script Reference, which contains a more thorough introduction into scripting with Unity along with pointers to more in-depth information. If you're really stuck, be sure to visit the Unity Answers or Unity Forums and ask questions there. Someone is always willing to help.
Page last updated: 2012-06-28
Asset Store
Unity's Asset Store is home to a growing library of free and commercial assets created both by Unity Technologies and also members of the community. A wide variety of assets is available, covering everything from textures, models and animations to whole project examples, tutorials and Editor extensions. The assets are accessed from a simple interface built into the Unity Editor and are downloaded and imported directly into your project.
Access and Navigation
You can open the Asset Store window from the Window menu in the main menu bar. On your first visit, you will be prompted to create a free user account which you will use to access the Store subsequently.

The Asset Store front page.
The Store provides a browser-like interface which allows you to navigate either by free text search or by browsing packages and categories. To the left of the main tool bar are the familiar browsing buttons for navigating through the history of viewed items:

To the right of these are buttons for viewing the Download Manager and for viewing the current contents of your shopping cart.

The Download Manager allows you to view the packages you have already bought and also to find and install any updates. Additionally, the standard packages supplied with Unity can be viewed and added to your project with the same interface.

The Download Manager.
Location of Downloaded Asset Files
You will rarely, if ever, need to access the files downloaded from the Asset Store directly. However, if you do need to, you can find them in
~/Library/Unity/Asset Store
...on the Mac and in
C:\Users\accountName\AppData\Roaming\Unity\Asset Store
...on Windows. These folders contain subfolders that correspond to particular Asset Store vendors - the actual asset files are contained in the appropriate subfolders.
Page last updated: 2011-12-09
Asset Server
Unity Asset Server Overview
The Unity Asset Server is an asset and version control system with a graphical user interface integrated into Unity. It is meant to be used by team members working together on a project on different computers, either in person or remotely. The Asset Server is highly optimized for handling large binary assets, in order to cope with multi-gigabyte project folders. When assets are uploaded, their Import Settings and other metadata are uploaded to the Asset Server as well. Renaming and moving files is at the core of the system and well supported.
It is available only for Unity Pro, and is an additional license per client. To purchase an Asset Server Client License, please visit the Unity store at http://unity3d.com/store
New to Source Control?
If you have never used Source Control before, any versioning system can be a little daunting to get started with. Source Control works by storing an entire collection of all your assets - meshes, textures, materials, scripts, and everything else - in a database on some kind of server. That server might be your home computer, the same one that you use to run Unity. It might be a different computer in your local network. It might be a remote machine colocated in a different part of the world. It could even be a virtual machine. There are a lot of options, but the location of the server doesn't matter at all. The important thing is that you can access it somehow over your network, and that it stores your game data.
In a way, the Asset Server functions as a backup of your Project Folder. You do not directly manipulate the contents of the Asset Server while you are developing. You make changes to your Project locally, then when you are done, you commit those changes to the Project on the Server. This makes your local Project and the Asset Server Project identical.
Now, when a fellow developer makes a change, the Asset Server is identical to their Project, but not to yours. To synchronize your local Project, you request an update from the Server. Whatever changes your team members have made will then be downloaded from the server to your local Project.
This is the basic workflow for using the Asset Server. In addition to this basic functionality, the Asset Server allows for rollback to previous versions of assets, detailed file comparison, merging two different scripts, resolving conflicts, and recovering deleted assets.
Setting up the Asset Server
The Asset Server requires a one time server setup and a client configuration for each user. You can read about how to do that in the Asset Server Setup page.
The rest of this guide explains how to deploy, administrate, and regularly use the Asset Server.
Daily use of the Asset Server
This section explains the common tasks, workflow and best practices for using the Asset Server on a day-to-day basis.
Getting Started
If you are joining a team that has a lot of work stored on the Asset Server already, this is the quickest way to get up and running correctly. If you are starting your own project from scratch, you can skip down to the Workflow Fundamentals section.
- Create a new empty Project with no packages imported
- Go to and select as the version control mode
- From the menubar, select
- Click the button
- Enter your user name and password (provided by your Asset Server administrator)
- Click and select the desired project
- Click
- Click the tab
- Click the button
- If a conflict occurs, discard all local versions
- Wait for the update to complete
- You are ready to go
Continue reading for detailed information on how to use the Asset Server effectively every day.
Workflow Fundamentals
When using the Asset Server with a multi-person team, it is generally good practice to Update all changed assets from the server when you begin working, and Commit your changes at the end of the day, or whenever you're done working. You should also commit changes when you have made significant progress on something, even if it is in the middle of the day. Committing your changes regularly and frequently is recommended.
Understanding the Server View
The Server View is your window into the Asset Server you're connected to. You can open the Server View by selecting .

The Overview tab
The Server View is broken into three tabs: Overview, Update, and Commit. Overview will show you any differences between your local project and the latest version on the server, with options to quickly commit local changes or download the latest updates. Update will show you the latest remote changes on the server and allow you to download them to your local project. Commit allows you to create a Changeset and commit it to the server for others to download.
Connecting to the server
Before you can use the Asset Server, you must connect to it. To do this, click the connection button, which takes you to the connection screen:

The Asset Server connection screen
Here you need to fill in:
- Server address
- Username
- Password
Once these are filled in, you can see the available projects on the Asset Server and choose which one to connect to. Note that the username and password you use can be obtained from your system administrator; your system administrator created the accounts when they installed the Asset Server.
Updating from the Server
To download all updates from the server, select the Update tab from the Overview tab and you will see a list of the latest committed Changesets. By selecting one of these you can see what was changed in the project, as well as the commit message that was provided. Start the update and you will begin downloading all Changeset updates.

The Update Tab
Committing Changes to the Server
When you have made a change to your local project and you want to store those changes on the server, use the Commit tab.

The Commit tab
Now you will be able to see all the local changes made to the project since your last update, and will be able to select which changes you wish to upload to the server. You can add changes to the changeset either by manually dragging them into the changeset field, or by using the buttons placed below the commit message field. Remember to type in a commit message which will help you when you compare versions or revert to an earlier version later on, both of which are discussed below.
Resolving conflicts
With multiple people working on the same collection of data, conflicts will inevitably arise. Remember, there is no need to panic! If a conflict exists, you will be presented with the Conflict Resolution dialog when updating your project.

The Conflict Resolution screen
Here, you will be informed of each individual conflict and presented with options to resolve it. For any single conflict, you can choose to skip the asset (the server version will not be downloaded), discard your local changes (your local version of the asset will be completely overwritten), or ignore the server changes (the changes others made to the asset are ignored, and after the update you will be able to commit your local changes over the server's version). Additionally, for text assets like scripts you can choose to merge the server version with the local version.
Note: If you choose to discard your changes, the asset will be updated to the latest version from the server (i.e., it will incorporate other users' changes that have been made while you were working). If you want to get the asset back as it was when you started working, you should revert to the specific version that you checked out. (See Browsing revision history and reverting assets below.)
If you run into a conflict while committing your local changes, Unity will refuse to commit them and inform you that a conflict exists. To resolve the conflicts, update from the server; your local changes will not automatically be overwritten. At this point you will see the Conflict Resolution dialog, and can follow the instructions in the paragraph above.
Browsing revision history and reverting assets
The Asset Server retains all uploaded versions of an asset in its database, so you can revert your local version to an earlier version at any time, for either the entire project or single files. To revert to an older version of an asset or of the project, select the Overview tab, then click Show History under Asset Server Actions. You will see a list of all commits, and can select and restore any file, or the whole project, to an older version.

The History dialog
Here, you can see the version number and the comments added with each version of the asset or project - one reason why descriptive comments are helpful. Select an asset to see its history, or view all changes made in the project. Find the revision you need; you can select either a whole revision or a particular asset within a revision. You can then replace your local asset with a copy of the selected revision, or revert the entire project to that revision.
Note that if there are any differences between your local version and the selected server version, those local changes will be lost when the local version is reverted.
If you only want to abandon the changes made to the local copy, you don't have to revert. You can discard those local modifications from the main Asset Server window, which will immediately download the current version of the project from the server to your local Project.
Comparing asset versions
If you're curious to see the differences between two particular versions, you can compare them explicitly. To do this, open the History window, select the revision and asset you want to compare, and start a comparison. If you need to compare two different revisions of an asset, right-click it, choose the compare option from the context menu, then find the revision you want to compare against and select it.
Note: this feature requires that you have one of the supported file diff/merge tools installed. The supported tools are:
- On Windows:
- TortoiseMerge: part of TortoiseSVN or a separate download from the project site.
- WinMerge.
- SourceGear Diff/Merge.
- Perforce Merge (p4merge): part of Perforce's visual client suite (P4V).
- TkDiff.
- On Mac OS X:
- SourceGear Diff/Merge.
- FileMerge: part of Apple's XCode development tools.
- TkDiff.
- Perforce Merge (p4merge): part of Perforce's visual client suite (P4V).
Recovering deleted assets
Deleting a local asset and committing the delete to the server will not actually delete the asset permanently. Just like any previous version of an asset, it can be restored through the History window, opened from the Overview tab.

The History dialog
Expand the item for deleted assets, find and select the assets in the list, and recover them; the selected assets will be downloaded and re-added to the local project. If the folder that an asset was in before the deletion still exists, the asset will be restored to its original location; otherwise it will be added to the root of the Assets folder in the local project.
Best Practices & Common Issues
This is a compilation of best practices and solutions to problems which will help you when using the Asset Server:
- Backup, Backup, Backup
- Maintain a backup of your database. This is very important: in the unfortunate case of a hardware problem, a virus, a user error, etc., you may lose all of your work, so make sure you have a backup system in place. You can find lots of resources online for setting up backup systems.
- Stop the server before shutting the machine down
- This can prevent "fast shutdowns" from being generated in the PostgreSQL (Asset Server) log. If this occurs the Asset Server has to do a recovery due to an improper shut down. This can take a very long time if you have a large project with many commits.
- Resetting your password from the console
- You can reset your password directly from a shell, console or command line using the following command:
psql -U unitysrv -d template1 -c"alter role admin with password 'MYPASSWORD'"
- Can't connect to Asset Server
- The password may have expired. Try resetting your password.
- Also the username is case sensitive: "Admin" != "admin". Make sure you are using the correct case.
- Make sure the server is actually running:
- On OS X or Linux you can type on the terminal: ps -aux
- On Windows you can use the Task Manager.
- Verify that the Asset Server is not running on more than one computer in your Network. You could be connecting to the wrong one.
- The Asset Server doesn't work in 64-bit Linux
- The asset server can run OK on 64-bit Linux machines if you install 32-bit versions of the required packages. You can use "dpkg -i --force-architecture" to do this.
- Use the Asset Server logs to get more information
- Windows:
- \Unity\AssetServer\log
- OS X:
- /Library/UnityAssetServer/log
- Windows:
Asset Server training complete
You should now be equipped with the knowledge you need to start using the Asset Server effectively. Get to it, and don't forget the good workflow fundamentals. Commit changes often, and don't be afraid of losing anything.
Page last updated: 2011-10-31
Asset Cache Server
Unity has a completely automatic asset pipeline. Whenever a source asset like a .psd or an .fbx file is modified, Unity detects the change and automatically reimports it. The imported data from the file is then stored by Unity in its own internal format. The best parts of the asset pipeline are the "hot reloading" functionality and the guarantee that all your source assets are always in sync with what you see. This feature also comes at a cost: any asset that is modified has to be reimported right away. When working in large teams, after getting the latest version from Source Control you often have to wait a long time while all the assets modified or created by other team members are reimported. Also, switching your project platform back and forth between desktop and mobile will trigger a reimport of most assets.
The time it takes to import assets can be drastically reduced by caching the imported asset data on the Cache Server.
Each asset import is cached based on
- The asset file itself
- The import settings
- Asset importer version
- The current platform.
If any of the above change, the asset gets reimported, otherwise it gets downloaded from the Cache Server.
When you enable the cache server in the preferences, you can even share asset imports across multiple projects.
Note that once the cache server is set up, this process is completely automatic, which means there are no additional workflow requirements. It will simply reduce the time it takes to import projects without getting in your way.
How to set up a Cache Server (user)
Setting up the Cache Server couldn't be easier. All you need to do is check Use Cache Server in Unity's preferences and tell the local machine's Unity Editor where the Cache Server is.
The preferences window is opened from the Unity menu on the Mac or the Edit menu on the PC.
If you are hosting the Cache Server on your local machine, specify localhost as the server address. However, due to hard drive size limitations, it is recommended that you host the Cache Server on a separate machine.
How to set up a Cache Server (admin)
Admins need to set up the Cache Server machine that will host the cached assets.
You need to:
- Download the Cache Server here
- Unzip the file, after which you should see something like this:
- Depending on your operating system, run the appropriate command script.
- You will see a terminal window, indicating that the Cache Server is running in the background
The Cache Server needs to be on a reliable machine with very large storage (much larger than the size of the project itself, as there will likely be multiple versions of imported resources stored). If the hard disk becomes full the Cache Server could perform slowly.
Installing the Cache Server as a service
The provided .sh and .cmd scripts should be set up as a service on the server.
The cache server can be safely killed and restarted at any time, since it uses atomic file operations.
Cache Server Configuration
If you simply start the Cache Server by double clicking the script, it will create a "cache" directory next to the script, and keep its data in there. The cache directory is allowed to grow to up to 50 GB. You can configure the size and the location of the data using command line options, like this:
./RunOSX.command --path ~/mycachePath --size 2000000000
--path lets you specify a cache location, and --size lets you specify the maximum cache size in bytes.
Recommendations for the machine hosting the Cache Server
We recommend equipping the machine with a lot of RAM. For best performance, there should be enough RAM to hold an entire imported project folder. In addition, it is best to have a machine with a fast hard drive and a fast Ethernet connection. The hard drive should also have sufficient free space. The Cache Server itself, on the other hand, has very low CPU usage.
One of the main distinctions between the Cache Server and version control is that its cached data can always be rebuilt locally. It is simply a tool for improving performance. For this reason it doesn't make sense to use a Cache Server over the Internet. If you have a distributed team, we recommend that you place a separate cache server in each location.
We recommend that you run the cache server on a Linux or Mac OS X machine. The Windows file system is not particularly well optimized for how the Asset Cache Server stores data and problems with file locking on Windows can cause issues that don't occur on Linux or Mac OS X.
Page last updated: 2012-10-26
Cache Server FAQ
Will the size of my Cache Server database grow indefinitely as more and more resources get imported and stored?
The Cache Server removes assets that have not been used for a period of time automatically (of course if those assets are needed again, they will be re-created during next usage).
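That pruning behaviour can be pictured as an age-based sweep over the cache (illustrative only; Unity does not document the actual eviction policy or thresholds):

```python
import time

def prune_cache(entries, max_age_seconds, now=None):
    """Keep only entries accessed within the last max_age_seconds.

    `entries` maps cache key -> (data, last_access_time).
    """
    now = time.time() if now is None else now
    return {key: (data, seen) for key, (data, seen) in entries.items()
            if now - seen <= max_age_seconds}
```

An evicted entry is not lost for good: if the asset is requested again, the client simply reimports it locally and the result is re-uploaded to the cache.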
Does the cache server work only with the asset server?
The cache server is designed to be transparent to source/version control systems and so you are not restricted to using Unity's asset server.
What changes will cause the imported file to get regenerated?
When Unity is about to import an asset, it generates an MD5 hash of all source data.
For a texture this consists of:
- The source asset: "myTexture.psd" file
- The meta file: "myTexture.psd.meta" (Stores all importer settings)
- The internal version number of the texture importer
- A hash of version numbers of all AssetPostprocessors
If that hash is different from what is stored on the Cache Server, the asset will be reimported, otherwise the cached version will be downloaded. The client Unity editor will only pull assets from the server as they are needed - assets don't get pushed to each project as they change.
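The hash composition for a texture described above can be sketched like this (the exact byte layout Unity feeds into MD5 is internal; the function and parameter names here are illustrative):

```python
import hashlib

def texture_import_hash(psd_bytes, meta_bytes, importer_version, postprocessor_versions):
    """MD5 over everything that can change the imported result."""
    h = hashlib.md5()
    h.update(psd_bytes)                        # myTexture.psd
    h.update(meta_bytes)                       # myTexture.psd.meta (importer settings)
    h.update(str(importer_version).encode())   # internal texture importer version
    for version in sorted(postprocessor_versions):
        h.update(str(version).encode())        # versions of all AssetPostprocessors
    return h.hexdigest()
```

Any single changed input produces a different digest, which is exactly what triggers a reimport instead of a download.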
How do I work with Asset dependencies?
The Cache Server does not handle dependencies. Unity's asset pipeline does not deal with the concept of dependencies. It is built in such a way as to avoid dependencies between assets. AssetPostprocessors are a common technique used to customize the Asset importer to fit your needs. For example, you might want to add MeshColliders to some GameObjects in an fbx file based on their name or tag.
It is also easy to use AssetPostprocessors to introduce dependencies. For example, you might use data from a text file next to the asset to add additional components to the imported game objects. This is not supported by the Cache Server. If you want to use the Cache Server, you will have to remove dependencies on other assets in the project folder. Since the Cache Server doesn't know anything about the dependencies in your postprocessor, it will not know that anything has changed and will therefore use an old cached version of the asset.
In practice there are plenty of ways you can do asset postprocessing to work well with the cache server. You can use:
- The Path of the imported asset
- Any import settings of the asset
- The source asset itself or any data generated from it passed to you in the asset postprocessor.
Are there any issues when working with materials?
Modifying materials that already exist might cause trouble. When using the Cache Server, Unity validates that the references to materials are maintained. But since no postprocessing calls will be invoked, the contents of the material can not be changed when a model is imported through the Cache Server. Thus you might get different results when importing with or without Cache Server. It is best to never modify materials that already exist on disk.
Are there any asset types which will not be cached by the server?
There are a few kinds of asset data which the server doesn't cache. There isn't really anything to be gained by caching script files, so the server ignores them. Also, native files used by 3D modelling software (Maya, 3ds Max, etc.) are converted to FBX using the application itself. Currently, the cache server caches neither the native file nor the intermediate FBX file generated in the import process. However, it is possible to benefit from the server by exporting files as FBX from the modelling software and adding those to the Unity project.
Page last updated: 2012-09-04
Behind the Scenes
Unity automatically imports assets and manages various kinds of additional data about them for you. Below is a description of how this process works.
When you place an Asset such as a texture in the Assets folder, Unity will first detect that a new file has been added (the editor frequently checks the contents of the Assets folder against the list of assets it already knows about). Once a unique ID value has been assigned to the asset to enable it to be accessed internally, it will be imported and processed. The asset that you actually see in the Project panel is the result of that processing and its data contents will typically be different to those of the original asset. For example, a texture may be present in the Assets folder as a PNG file but will be converted to an internal format after import and processing.
Using an internal format for assets allows Unity to keep additional data known as metadata, which enables the asset data to be handled in a much more flexible way. For example, the Photoshop file format is convenient to work with, but you wouldn't expect it to support game engine features such as mip maps. Unity's internal format, however, can add extra functionality like this to any asset type. All metadata for assets is stored in the Library folder. As a user, you should never alter the Library folder manually; attempting to do so may corrupt the project.
Unity allows you to create folders in the Project view to help you organize assets, and those folders will be mirrored in the actual filesystem. However, you must move the files within Unity by dragging and dropping in the Project view. If you attempt to use the filesystem/desktop to move the files then Unity will misinterpret the change (it will appear that the old asset has been deleted and a new one created in its place). This will lose information, such as links between assets and scripts in the project.
When backing up a project, you should always back up the main Unity project folder, containing both the Assets and Library folders. All the information in the subfolders is crucial to the way Unity works.
Page last updated: 2011-11-16
Creating Gameplay
Unity empowers game designers to make games. What's really special about Unity is that you don't need years of experience with code or a degree in art to make fun games. There are a handful of basic workflow concepts needed to learn Unity, and once understood, you will find yourself making games in no time. With the time you save getting your game up and running, you will have that much more time to refine, balance, and tweak your game to perfection.
This section will explain the core concepts you need to know for creating unique, amazing, and fun gameplay. The majority of these concepts require you to write Scripts. For an overview of creating and working with Scripts, read the Scripting page.
- Instantiating Prefabs at runtime
- Input
- Transforms
- Physics
- Adding Random Gameplay Elements
- Particle Systems
- Mecanim Animation System
- Animations (Legacy)
- Navmesh and Pathfinding (Pro only)
- Sound
- Game Interface Elements
- Networked Multiplayer
Instantiating Prefabs
By this point you should understand the concept of Prefabs at a fundamental level. They are a collection of predefined GameObjects & Components that are re-usable throughout your game. If you don't know what a Prefab is, we recommend you read the Prefabs page for a more basic introduction.
Prefabs come in very handy when you want to instantiate complicated GameObjects at runtime. The alternative to instantiating Prefabs is to create GameObjects from scratch using code. Instantiating Prefabs has many advantages over the alternative approach:
- You can instantiate a Prefab from one line of code, with complete functionality. Creating equivalent GameObjects from code takes an average of five lines of code, but likely more.
- You can set up, test, and modify the Prefab quickly and easily in the Scene and Inspector.
- You can change the Prefab being instanced without changing the code that instantiates it. A simple rocket might be altered into a super-charged rocket, and no code changes are required.
Common Scenarios
To illustrate the strength of Prefabs, let's consider some basic situations where they would come in handy:
- Building a wall out of a single "brick" Prefab by creating it several times in different positions.
- A rocket launcher instantiates a flying rocket Prefab when fired. The Prefab contains a Mesh, Rigidbody, Collider, and a child GameObject with its own trail Particle System.
- A robot exploding to many pieces. The complete, operational robot is destroyed and replaced with a wrecked robot Prefab. This Prefab would consist of the robot split into many parts, all set up with Rigidbodies and Particle Systems of their own. This technique allows you to blow up a robot into many pieces, with just one line of code, replacing one object with a Prefab.
Building a wall
This explanation will illustrate the advantages of using a Prefab vs creating objects from code.
First, let's build a brick wall from code:
// JavaScript
function Start () {
for (var y = 0; y < 5; y++) {
for (var x = 0; x < 5; x++) {
var cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
cube.AddComponent(Rigidbody);
cube.transform.position = Vector3 (x, y, 0);
}
}
}
// C#
public class Instantiation : MonoBehaviour {
void Start() {
for (int y = 0; y < 5; y++) {
for (int x = 0; x < 5; x++) {
GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
cube.AddComponent<Rigidbody>();
cube.transform.position = new Vector3(x, y, 0);
}
}
}
}
- To use the above script, simply save it and drag it onto an empty GameObject.
- Create an empty GameObject with GameObject > Create Empty.
If you execute that code, you will see an entire brick wall is created when you enter Play Mode. There are two lines relevant to the functionality of each individual brick: the CreatePrimitive() line, and the AddComponent() line. Not so bad right now, but each of our bricks is un-textured. Every additional action you want to perform on the brick, like changing the texture, the friction, or the Rigidbody mass, is an extra line.
If you create a Prefab and perform all your setup before-hand, you use one line of code to perform the creation and setup of each brick. This relieves you from maintaining and changing a lot of code when you decide you want to make changes. With a Prefab, you just make your changes and Play. No code alterations required.
If you're using a Prefab for each individual brick, this is the code you need to create the wall.
// JavaScript
var brick : Transform;
function Start () {
for (var y = 0; y < 5; y++) {
for (var x = 0; x < 5; x++) {
Instantiate(brick, Vector3 (x, y, 0), Quaternion.identity);
}
}
}
// C#
public Transform brick;
void Start() {
for (int y = 0; y < 5; y++) {
for (int x = 0; x < 5; x++) {
Instantiate(brick, new Vector3(x, y, 0), Quaternion.identity);
}
}
}
This is not only very clean but also very reusable. There is nothing saying we are instantiating a cube or that it must contain a rigidbody. All of this is defined in the Prefab and can be quickly created in the Editor.
Now we only need to create the Prefab, which we do in the Editor. Here's how:
- Choose GameObject > Create Other > Cube
- Choose Component > Physics > Rigidbody
- Choose Assets > Create > Prefab
- In the Project View, change the name of your new Prefab to "Brick"
- Drag the cube you created in the Hierarchy onto the "Brick" Prefab in the Project View
- With the Prefab created, you can safely delete the Cube from the Hierarchy (Delete on Windows, Cmd-Backspace on Mac)
We've created our Brick Prefab, so now we have to attach it to the brick variable in our script. Select the empty GameObject that contains the script. Notice that a new variable has appeared in the Inspector, called "brick".

This variable can accept any GameObject or Prefab
Now drag the "Brick" Prefab from the Project View onto the brick variable in the Inspector. Press Play and you'll see the wall built using the Prefab.
This is a workflow pattern that can be used over and over again in Unity. In the beginning you might wonder why this is so much better, because the script creating the cube from code is only 2 lines longer.
But because you are using a Prefab now, you can adjust the Prefab in seconds. Want to change the mass of all those instances? Adjust the Rigidbody in the Prefab only once. Want to use a different Material for all the instances? Drag the Material onto the Prefab only once. Want to change friction? Use a different Physic Material in the Prefab's collider. Want to add a Particle System to all those boxes? Add a child to the Prefab only once.
Instantiating rockets & explosions
Here's how Prefabs fit into this scenario:
- A rocket launcher instantiates a rocket Prefab when the user presses fire. The Prefab contains a mesh, Rigidbody, Collider, and a child GameObject that contains a trail particle system.
- The rocket impacts and instantiates an explosion Prefab. The explosion Prefab contains a Particle System, a light that fades out over time, and a script that applies damage to surrounding GameObjects.
While it would be possible to build a rocket GameObject completely from code, adding Components manually and setting properties, it is far easier to instantiate a Prefab. You can instantiate the rocket in just one line of code, no matter how complex the rocket's Prefab is. After instantiating the Prefab you can also modify any properties of the instantiated object (e.g. you can set the velocity of the rocket's Rigidbody).
Aside from being easier to use, you can update the prefab later on. So if you are building a rocket, you don't immediately have to add a Particle trail to it. You can do that later. As soon as you add the trail as a child GameObject to the Prefab, all your instantiated rockets will have particle trails. And lastly, you can quickly tweak the properties of the rocket Prefab in the Inspector, making it far easier to fine-tune your game.
This script shows how to launch a rocket using the Instantiate() function.
// JavaScript
// Require the rocket to be a rigidbody.
// This way the user cannot assign a prefab without a rigidbody
var rocket : Rigidbody;
var speed = 10.0;
function FireRocket () {
var rocketClone : Rigidbody = Instantiate(rocket, transform.position, transform.rotation);
rocketClone.velocity = transform.forward * speed;
// You can also access other components / scripts of the clone
rocketClone.GetComponent(MyRocketScript).DoSomething();
}
// Fires the rocket when the Fire1 button (by default ctrl or left mouse) is pressed
function Update () {
if (Input.GetButtonDown("Fire1")) {
FireRocket();
}
}
// C#
// Require the rocket to be a rigidbody.
// This way the user cannot assign a prefab without a rigidbody
public Rigidbody rocket;
public float speed = 10f;
void FireRocket () {
Rigidbody rocketClone = (Rigidbody) Instantiate(rocket, transform.position, transform.rotation);
rocketClone.velocity = transform.forward * speed;
// You can also access other components / scripts of the clone
rocketClone.GetComponent<MyRocketScript>().DoSomething();
}
// Fires the rocket when the Fire1 button (by default ctrl or left mouse) is pressed
void Update () {
if (Input.GetButtonDown("Fire1")) {
FireRocket();
}
}
Replacing a character with a ragdoll or wreck
Let's say you have a fully rigged enemy character and he dies. You could simply play a death animation on the character and disable all scripts that usually handle the enemy logic. You probably have to take care of removing several scripts, adding some custom logic to make sure that no one will continue attacking the dead enemy anymore, and other cleanup tasks.
A far better approach is to immediately delete the entire character and replace it with an instantiated wrecked prefab. This gives you a lot of flexibility. You could use a different material for the dead character, attach completely different scripts, spawn a Prefab containing the object broken into many pieces to simulate a shattered enemy, or simply instantiate a Prefab containing a ragdoll version of the character.
Any of these options can be achieved with a single call to Instantiate(), you just have to hook it up to the right prefab and you're set!
The important part to remember is that the wreck you Instantiate() can be made of completely different objects than the original. For example, if you have an airplane, you would model two versions: an intact one, where the plane consists of a single GameObject with a Mesh Renderer and scripts for airplane physics, and a wrecked one split into separate parts. By keeping the intact model in just one GameObject your game will run faster: you can build it with fewer triangles, and because it consists of fewer objects it will render faster than many small parts. Besides, while your plane is happily flying around there is no reason to have it in separate parts.
To build a wrecked airplane Prefab, the typical steps are:
- Model your airplane with lots of different parts in your favorite modeler
- Create an empty Scene
- Drag the model into the empty Scene
- Add Rigidbodies to all parts by selecting them and choosing Component > Physics > Rigidbody
- Add Box Colliders to all parts by selecting them and choosing Component > Physics > Box Collider
- For an extra special effect, add a smoke-like Particle System as a child GameObject to each of the parts
- Now you have an airplane with multiple exploded parts. The parts fall to the ground under physics and leave particle trails thanks to their attached Particle Systems. Hit Play to preview how your model reacts and make any necessary tweaks.
- Choose Assets > Create > Prefab
- Drag the root GameObject containing all the airplane parts into the Prefab
// JavaScript
var wreck : GameObject;
// As an example, we turn the game object into a wreck after 3 seconds automatically
function Start () {
yield WaitForSeconds(3);
KillSelf();
}
// Replaces this game object with the wreck
function KillSelf () {
// Instantiate the wreck game object at the same position we are at
var wreckClone = Instantiate(wreck, transform.position, transform.rotation);
// Sometimes we need to carry over some variables from this object
// to the wreck
wreckClone.GetComponent(MyScript).someVariable = GetComponent(MyScript).someVariable;
// Kill ourselves
Destroy(gameObject);
}
// C#
public GameObject wreck;
// As an example, we turn the game object into a wreck after 3 seconds automatically
IEnumerator Start() {
yield return new WaitForSeconds(3);
KillSelf();
}
// Replaces this game object with the wreck
void KillSelf () {
// Instantiate the wreck game object at the same position we are at
GameObject wreckClone = (GameObject) Instantiate(wreck, transform.position, transform.rotation);
// Sometimes we need to carry over some variables from this object
// to the wreck
wreckClone.GetComponent<MyScript>().someVariable = GetComponent<MyScript>().someVariable;
// Kill ourselves
Destroy(gameObject);
}
The First Person Shooter tutorial explains how to replace a character with a ragdoll version and also synchronize limbs with the last state of the animation. You can find that tutorial on the Tutorials page.
Placing a bunch of objects in a specific pattern
Let's say you want to place a bunch of objects in a grid or circle pattern. Traditionally this would be done by either:
- Building an object completely from code. This is tedious! Entering values from a script is slow, unintuitive, and not worth the hassle.
- Making the fully rigged object, duplicating it, and placing it multiple times in the scene. This is tedious, and placing objects accurately in a grid is hard.
So use Instantiate() with a Prefab instead! We think you get the idea of why Prefabs are so useful in these scenarios. Here's the code necessary for these scenarios:
// JavaScript
// Instantiates a prefab in a circle
var prefab : GameObject;
var numberOfObjects = 20;
var radius = 5;
function Start () {
for (var i = 0; i < numberOfObjects; i++) {
var angle = i * Mathf.PI * 2 / numberOfObjects;
var pos = Vector3 (Mathf.Cos(angle), 0, Mathf.Sin(angle)) * radius;
Instantiate(prefab, pos, Quaternion.identity);
}
}
// C#
// Instantiates a prefab in a circle
public GameObject prefab;
public int numberOfObjects = 20;
public float radius = 5f;
void Start() {
for (int i = 0; i < numberOfObjects; i++) {
float angle = i * Mathf.PI * 2 / numberOfObjects;
Vector3 pos = new Vector3(Mathf.Cos(angle), 0, Mathf.Sin(angle)) * radius;
Instantiate(prefab, pos, Quaternion.identity);
}
}
// JavaScript
// Instantiates a prefab in a grid
var prefab : GameObject;
var gridX = 5;
var gridY = 5;
var spacing = 2.0;
function Start () {
for (var y = 0; y < gridY; y++) {
for (var x = 0; x < gridX; x++) {
var pos = Vector3 (x, 0, y) * spacing;
Instantiate(prefab, pos, Quaternion.identity);
}
}
}
// C#
// Instantiates a prefab in a grid
public GameObject prefab;
public int gridX = 5;
public int gridY = 5;
public float spacing = 2f;
void Start() {
for (int y = 0; y < gridY; y++) {
for (int x = 0; x < gridX; x++) {
Vector3 pos = new Vector3(x, 0, y) * spacing;
Instantiate(prefab, pos, Quaternion.identity);
}
}
}
Page last updated: 2012-10-09
Input

Desktop
Note: Keyboard, joystick and gamepad input work on the desktop versions of Unity (including webplayer and Flash) but not on mobiles.
Unity supports keyboard, joystick and gamepad input.
Virtual axes and buttons can be created in the Input Manager, and end users can configure Keyboard input in a nice screen configuration dialog.

You can set up joysticks, gamepads, keyboard, and mouse, then access them all through one simple scripting interface.
From scripts, all virtual axes are accessed by their name.
Every project has the following default input axes when it's created:
- Horizontal and Vertical are mapped to w, a, s, d and the arrow keys.
- Fire1, Fire2, Fire3 are mapped to Control, Option (Alt), and Command, respectively.
- Mouse X and Mouse Y are mapped to the delta of mouse movement.
- Window Shake X and Window Shake Y are mapped to the movement of the window.
Adding new Input Axes
If you want to add new virtual axes, go to Edit > Project Settings > Input. Here you can also change the settings of each axis.

You map each axis to two buttons on a joystick or mouse, or to keyboard keys.
| Name | The name of the string used to check this axis from a script. |
| Descriptive Name | Positive value name displayed in the input tab of the dialog for standalone builds. |
| Descriptive Negative Name | Negative value name displayed in the Input tab of the dialog for standalone builds. |
| Negative Button | The button used to push the axis in the negative direction. |
| Positive Button | The button used to push the axis in the positive direction. |
| Alt Negative Button | Alternative button used to push the axis in the negative direction. |
| Alt Positive Button | Alternative button used to push the axis in the positive direction. |
| Gravity | Speed in units per second that the axis falls toward neutral when no buttons are pressed. |
| Dead | Size of the analog dead zone. All analog device values within this range map to neutral. |
| Sensitivity | Speed in units per second that the axis will move toward the target value. This is for digital devices only. |
| Snap | If enabled, the axis value will reset to zero when pressing a button of the opposite direction. |
| Invert | If enabled, the Negative Buttons provide a positive value, and vice-versa. |
| Type | The type of inputs that will control this axis. |
| Axis | The axis of a connected device that will control this axis. |
| Joy Num | The connected Joystick that will control this axis. |
Use these settings to fine tune the look and feel of input. They are all documented with tooltips in the Editor as well.
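To get a feel for how Gravity, Sensitivity, Snap, and the clamp to [-1, 1] interact on a digital (button-driven) axis, here is a one-frame update sketch (hypothetical code; Unity's actual smoothing implementation is not public):

```python
def step_axis(value, direction, sensitivity, gravity, snap, dt):
    """Advance a virtual axis by one frame.

    direction is -1, 0, or +1 depending on which button is held.
    """
    if direction != 0:
        if snap and value * direction < 0:
            value = 0.0                         # Snap: reset when reversing direction
        value += direction * sensitivity * dt   # move toward the pressed end
    elif value > 0:
        value = max(0.0, value - gravity * dt)  # Gravity: fall back toward neutral
    else:
        value = min(0.0, value + gravity * dt)
    return max(-1.0, min(1.0, value))           # the axis always stays in [-1, 1]
```

Dead, by contrast, applies to analog devices: raw readings whose magnitude falls inside the dead zone are treated as neutral before any of the above.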
Using Input Axes from Scripts
You can query the current state from a script like this:
value = Input.GetAxis ("Horizontal");
An axis has a value between -1 and 1. The neutral position is 0. This is the case for joystick input and keyboard input.
However, Mouse Delta and Window Shake Delta are how much the mouse or window moved during the last frame. This means it can be larger than 1 or smaller than -1 when the user moves the mouse quickly.
It is possible to create multiple axes with the same name. When getting the input axis, the axis with the largest absolute value will be returned. This makes it possible to assign more than one input device to one axis name. For example, create one axis for keyboard input and one axis for joystick input with the same name. If the user is using the joystick, input will come from the joystick, otherwise input will come from the keyboard. This way you don't have to consider where the input comes from when writing scripts.
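The largest-absolute-value rule is simple enough to state in one line (a sketch; the real resolution happens inside the Input Manager):

```python
def resolve_axis(readings):
    """Pick the reading with the largest magnitude among all axes
    sharing one name (e.g. a keyboard axis and a joystick axis)."""
    return max(readings, key=abs, default=0.0)
```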
Button Names
To map a key to an axis, you have to enter the key's name in the Positive Button or Negative Button property in the Inspector.
The names of keys follow this convention:
- Normal keys: "a", "b", "c" ...
- Number keys: "1", "2", "3", ...
- Arrow keys: "up", "down", "left", "right"
- Keypad keys: "[1]", "[2]", "[3]", "[+]", "[equals]"
- Modifier keys: "right shift", "left shift", "right ctrl", "left ctrl", "right alt", "left alt", "right cmd", "left cmd"
- Mouse Buttons: "mouse 0", "mouse 1", "mouse 2", ...
- Joystick Buttons (from any joystick): "joystick button 0", "joystick button 1", "joystick button 2", ...
- Joystick Buttons (from a specific joystick): "joystick 1 button 0", "joystick 1 button 1", "joystick 2 button 0", ...
- Special keys: "backspace", "tab", "return", "escape", "space", "delete", "enter", "insert", "home", "end", "page up", "page down"
- Function keys: "f1", "f2", "f3", ...
The names used to identify the keys are the same in the scripting interface and the Inspector.
value = Input.GetKey ("a");
Mobile Input
On iOS and Android, the Input class offers access to touchscreen, accelerometer and geographical/location input.
Access to keyboard on mobile devices is provided via the iOS keyboard.
Multi-Touch Screen
The iPhone and iPod Touch devices are capable of tracking up to five fingers touching the screen simultaneously. You can retrieve the status of each finger touching the screen during the last frame by accessing the Input.touches property array.
Android devices don't have a unified limit on how many fingers they track. Instead, it varies from device to device and can be anything from two-touch on older devices to five fingers on some newer devices.
Each finger touch is represented by an Input.Touch data structure:
| fingerId | The unique index for a touch. |
| position | The screen position of the touch. |
| deltaPosition | The screen position change since the last frame. |
| deltaTime | Amount of time that has passed since the last state change. |
| tapCount | The iPhone/iPad screen is able to distinguish quick finger taps by the user. This counter lets you know how many times the user has tapped the screen without moving their finger sideways. Android devices do not count taps; this field is always 1. |
| phase | Describes the phase (state) of the touch. It can help you determine whether the touch just began, whether the user moved their finger, or whether they just lifted it. |
Phase can be one of the following:
| Began | A finger just touched the screen. |
| Moved | A finger moved on the screen. |
| Stationary | A finger is touching the screen but hasn't moved since the last frame. |
| Ended | A finger was lifted from the screen. This is the final phase of a touch. |
| Canceled | The system cancelled tracking for the touch, for example when the user puts the device to their face or when more than five touches occur simultaneously. This is the final phase of a touch. |
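As a small illustration of driving logic from these phases, the sketch below counts completed taps: touches that began and ended without a Moved phase in between (a hypothetical helper, not part of the Unity API):

```python
# Stand-ins for the TouchPhase values described above
BEGAN, MOVED, STATIONARY, ENDED, CANCELED = range(5)

def completed_taps(phases):
    """Count touches that went from Began to Ended without moving."""
    taps, moved = 0, False
    for phase in phases:
        if phase == BEGAN:
            moved = False            # a new touch starts clean
        elif phase == MOVED:
            moved = True             # the finger slid: a drag, not a tap
        elif phase == ENDED and not moved:
            taps += 1                # lifted without moving: a tap
    return taps
```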
Following is an example script which will shoot a ray whenever the user taps on the screen:
var particle : GameObject;
function Update () {
for (var touch : Touch in Input.touches) {
if (touch.phase == TouchPhase.Began) {
// Construct a ray from the current touch coordinates
var ray = Camera.main.ScreenPointToRay (touch.position);
if (Physics.Raycast (ray)) {
// Create a particle if hit
Instantiate (particle, transform.position, transform.rotation);
}
}
}
}
Mouse Simulation
On top of native touch support, Unity iOS/Android provides mouse simulation. You can use mouse functionality from the standard Input class.
Device Orientation
Unity iOS/Android allows you to get a discrete description of the device's physical orientation in three-dimensional space. Detecting a change in orientation can be useful if you want to create game behaviors that depend on how the user is holding the device.
You can retrieve device orientation by accessing the Input.deviceOrientation property. Orientation can be one of the following:
| Unknown | The orientation of the device cannot be determined. For example, when the device is rotated diagonally. |
| Portrait | The device is in portrait mode, with the device held upright and the home button at the bottom. |
| PortraitUpsideDown | The device is in portrait mode but upside down, with the device held upright and the home button at the top. |
| LandscapeLeft | The device is in landscape mode, with the device held upright and the home button on the right side. |
| LandscapeRight | The device is in landscape mode, with the device held upright and the home button on the left side. |
| FaceUp | The device is held parallel to the ground with the screen facing upwards. |
| FaceDown | The device is held parallel to the ground with the screen facing downwards. |
Accelerometer
As the mobile device moves, a built-in accelerometer reports linear acceleration changes along the three primary axes in three-dimensional space. Acceleration along each axis is reported directly by the hardware as G-force values. A value of 1.0 represents a load of about +1g along a given axis while a value of -1.0 represents -1g. If you hold the device upright (with the home button at the bottom) in front of you, the X axis is positive along the right, the Y axis is positive directly up, and the Z axis is positive pointing toward you.
You can retrieve the accelerometer value by accessing the Input.acceleration property.
The following is an example script which will move an object using the accelerometer:
var speed = 10.0;
function Update () {
var dir : Vector3 = Vector3.zero;
// we assume that the device is held parallel to the ground
// and the Home button is in the right hand
// remap the device acceleration axis to game coordinates:
// 1) XY plane of the device is mapped onto XZ plane
// 2) rotated 90 degrees around Y axis
dir.x = -Input.acceleration.y;
dir.z = Input.acceleration.x;
// clamp acceleration vector to the unit sphere
if (dir.sqrMagnitude > 1)
dir.Normalize();
// Make it move 10 meters per second instead of 10 meters per frame...
dir *= Time.deltaTime;
// Move object
transform.Translate (dir * speed);
}
Low-Pass Filter
Accelerometer readings can be jerky and noisy. Applying low-pass filtering on the signal allows you to smooth it and get rid of high frequency noise.
The following script shows you how to apply low-pass filtering to accelerometer readings:
var AccelerometerUpdateInterval : float = 1.0 / 60.0;
var LowPassKernelWidthInSeconds : float = 1.0; // tweakable
private var LowPassFilterFactor : float = AccelerometerUpdateInterval / LowPassKernelWidthInSeconds;
private var lowPassValue : Vector3 = Vector3.zero;

function Start () {
    lowPassValue = Input.acceleration;
}

function LowPassFilterAccelerometer() : Vector3 {
    lowPassValue = Mathf.Lerp(lowPassValue, Input.acceleration, LowPassFilterFactor);
    return lowPassValue;
}
The greater the value of LowPassKernelWidthInSeconds, the slower the filtered value will converge towards the current input sample (and vice versa). You can then call LowPassFilterAccelerometer() wherever you would otherwise read Input.acceleration directly.
I'd like as much precision as possible when reading the accelerometer. What should I do?
Reading the Input.acceleration variable does not equal sampling the hardware. Put simply, Unity samples the hardware at a frequency of 60Hz and stores the result in the variable. In reality, things are a little more complicated: accelerometer sampling doesn't occur at consistent time intervals, especially under significant CPU load. As a result, the system might report two samples during one frame, then one sample during the next frame.
You can access all of the measurements performed by the accelerometer during the frame. The following code illustrates a simple average of all the accelerometer events that were collected within the last frame (wrapped here in a helper function so the final return statement is valid):

function AverageAcceleration() : Vector3 {
    var period : float = 0.0;
    var acc : Vector3 = Vector3.zero;

    for (var evnt : iPhoneAccelerationEvent in iPhoneInput.accelerationEvents) {
        acc += evnt.acceleration * evnt.deltaTime;
        period += evnt.deltaTime;
    }

    if (period > 0)
        acc *= 1.0 / period;

    return acc;
}
Further Reading
The Unity mobile input API is originally based on Apple's API. It may help to learn more about the native API to better understand Unity's Input API. You can find the Apple input API documentation here:
- Programming Guide: Event Handling (Apple iPhone SDK documentation)
- UITouch Class Reference (Apple iOS SDK documentation)
Note: The above links reference your locally installed iPhone SDK Reference Documentation and will contain native Objective-C code. It is not necessary to understand these documents in order to use Unity on mobile devices, but they may be helpful to some!

iOS
Device geographical location
Device geographical location can be obtained via the iPhoneInput.lastLocation property. Before calling this property you should start location service updates using iPhoneSettings.StartLocationServiceUpdates() and check the service status via iPhoneSettings.locationServiceStatus. See the scripting reference for details.
Transforms
Transforms are a key Component in every GameObject. They dictate where the GameObject is positioned, how it is rotated, and its scale. It is impossible to have a GameObject without a Transform. You can adjust the Transform of any GameObject from the Scene View, the Inspector, or through Scripting.
The remainder of this page's text is from the Transform Component Reference page.
Transform
The Transform Component determines the actual Position, Rotation, and Scale of every object in the scene. Every object has a Transform.

"The Transform Component, viewable and editable in the Inspector"
Properties
| Position | Position of the Transform in X, Y, and Z coordinates. |
| Rotation | Rotation of the Transform around the X, Y, and Z axes, measured in degrees. |
| Scale | Scale of the Transform along the X, Y, and Z axes. A value of 1 is the original size (the size at which the object was imported). |
All properties of a Transform are measured relative to the Transform's parent (see below for details). If the Transform has no parent, the properties are measured in world space.
Using Transforms
Transforms are manipulated in 3D space along the X, Y, and Z axes. In Unity, these axes are represented by the colors red, green, and blue respectively. Remember: XYZ = RGB.

"Color-coded relationship between the three axes and Transform properties"
The Transform Component can be manipulated directly in the Scene View or by editing its properties in the Inspector. In the scene, you can modify Transforms using the Move, Rotate, and Scale tools. These tools are located in the upper left-hand corner of the Unity Editor.

"The View, Translate, Rotate, and Scale tools"
The tools can be used on any object in the scene. When you click on an object, the tool gizmo appears on it. The appearance of the gizmo depends on which tool is currently selected.

All three gizmos can be edited directly in the Scene View.
To manipulate the Transform, click and drag on one of the three gizmo axes; you will notice that its color changes. As you drag the mouse, the object will translate, rotate, or scale along the selected axis. When you release the mouse button, the axis remains selected. You can also click the middle mouse button and drag to manipulate the Transform along the most recently selected axis.

"Any individual axis becomes selected when you click on it"
Parenting
Parenting is one of the most important concepts to understand when using Unity. When a GameObject is a Parent of another GameObject, the Child GameObject will move, rotate, and scale exactly as its Parent does. Think of it like the relationship between your arms and your body: whenever your body turns, your arms move with it because they are attached. Any object can have multiple children, but only one parent.
You can create a Parent by dragging any GameObject in the Hierarchy View onto another. This creates a Parent-Child relationship between the two GameObjects.

"An example of a Parent-Child hierarchy. All GameObjects with foldout arrows to their left are parents."
In the example above, the body is the parent of the arms, and the arms are parents of the hands. Every scene you make in Unity will contain collections of these Transform hierarchies. The topmost parent object is called the Root object. When you move, scale, or rotate a parent, all of those changes to its Transform are applied to its children as well.
It is worth pointing out that the Transform values shown in the Inspector of any Child GameObject are displayed relative to the Parent's Transform values. These are called Local Coordinates. Through scripting, you can access the Global Coordinates as well as the Local Coordinates.
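As a minimal sketch of that distinction (assuming the script is attached to a child object in an existing hierarchy), the two coordinate systems can be read side by side:

```javascript
// Logs the same Transform in both coordinate systems.
// localPosition is relative to the parent's Transform,
// while position is always expressed in world space.
function Start () {
    Debug.Log("Local position: " + transform.localPosition);
    Debug.Log("World position: " + transform.position);
}
```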
You can create compound objects by parenting several separate objects together, such as the skeletal structure of a ragdoll. You can also achieve useful effects with simple hierarchies. For example, suppose you have a horror game that takes place at night, and you want to give the player a flashlight. To create this object, you would parent a spotlight Transform to the flashlight Transform. Then, any change to the flashlight's Transform will affect the spotlight as well, creating a convincing flashlight effect.
Performance issues and limitations with non-uniform scaling
Non-uniform scaling is when the Scale of a Transform has different values for x, y, and z, for example (2, 4, 2). In contrast, uniform scaling has the same value for x, y, and z, for example (3, 3, 3). Non-uniform scaling can be useful in a few select cases, but it should be avoided whenever possible.
Non-uniform scaling has a negative impact on rendering performance. In order to transform vertex normals correctly, the mesh is transformed on the CPU and a copy of the data is created. Normally a mesh is shared between instances and kept in graphics memory, but in this case both the CPU and memory cost are paid per instance.
There are also certain limitations in how Unity handles non-uniform scaling:
- Certain components do not fully support non-uniform scaling. For example, components that have a radius property (or something similar), such as a Sphere Collider, Capsule Collider, Light, or Audio Source, will remain circular/spherical rather than becoming elliptical under a non-uniform scale.
- When a child object has a non-uniformly scaled parent and is rotated relative to that parent, its transform matrix may become non-orthogonal, i.e. skewed. Even components that do support non-uniform scaling do not support non-orthogonal matrices. For example, a Box Collider cannot be skewed, so under a non-uniform scale it will no longer match the rendered mesh exactly.
- For performance reasons, when a child object has a non-uniformly scaled parent, the scale/matrix is not automatically updated while rotating. This can result in the scale changing abruptly, for example when the object is detached from its parent.
The Importance of Scale
The scale of the Transform determines the difference between the size of a mesh in your modeling application and the size of that mesh in Unity. The mesh's size in Unity (and therefore the Transform's scale) is very important, especially during physics simulation. Three factors determine the scale of an object:
- The size of the mesh in your 3D modeling application.
- The Mesh Scale Factor setting in the object's Import Settings.
- The Scale values of the Transform Component.
Ideally, you should not adjust the Scale of your object in the Transform Component. The best option is to create your models at real-life scale so you won't have to change the Transform's scale at all. The second-best option is to adjust the scale at which your mesh is imported, in the Import Settings for the individual mesh. Certain optimizations occur based on the import size, and instantiating an object with an adjusted scale value can decrease performance. For more information, see the section about optimizing scale on the Rigidbody component reference page.
Hints
- When parenting Transforms, set the parent's position to <0,0,0> before adding the child. This will save you many headaches later.
- Particle Systems are not affected by the Transform's Scale. In order to scale a Particle System, you need to modify the properties in the system's particle emitter, animator, and renderer.
- If you are using Rigidbodies for physics simulation, be sure to read about the Scale property on the Rigidbody page.
- You can change the colors of the Transform axes (and other UI elements) in the Unity preferences.
- If you can avoid scaling in Unity, do so. Try to finalize the scale of your objects in your 3D modeling application, or in the mesh's Import Settings.
Physics
Unity has the NVIDIA PhysX physics engine built in. This allows for unique emergent behaviour and provides many useful features.
Basics
To put an object under physics control, simply add a Rigidbody to it. When you do this, the object will be affected by gravity, and can collide with other objects in the world.
Rigidbodies
Rigidbodies are physically simulated objects. You use Rigidbodies for things that the player can push around, such as crates or loose objects, or you can move Rigidbodies around directly by applying forces to them via scripting.
If you move the Transform of a non-Kinematic Rigidbody directly it may not collide correctly with other objects. Instead you should move a Rigidbody by applying forces and torque to it. You can also add Joints to rigidbodies to make the behavior more complex. For example, you could make a physical door or a crane with a swinging chain.
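As a minimal sketch of moving a Rigidbody with forces (the force values here are arbitrary, and a Rigidbody is assumed to be attached to the same GameObject), forces belong in FixedUpdate so they are applied in step with the physics simulation:

```javascript
var pushForce = 10.0;

function FixedUpdate () {
    // AddForce and AddTorque respect mass, drag, and collisions,
    // unlike setting transform.position directly.
    rigidbody.AddForce(transform.forward * pushForce);
    rigidbody.AddTorque(Vector3.up * 0.5);
}
```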
You also use Rigidbodies to bring vehicles to life, for example you can make cars using a Rigidbody, 4 Wheel Colliders and a script applying wheel forces based on the user's Input.
You can make airplanes by applying forces to the Rigidbody from a script. Or you can create special vehicles or robots by adding various Joints and applying forces via scripting.
Rigidbodies are most often used in combination with primitive colliders.
Tips:
- You should never have a parent and child rigidbody together
- You should never scale the parent of a rigidbody
Kinematic Rigidbodies
A Kinematic Rigidbody is a Rigidbody that has the isKinematic option enabled. Kinematic Rigidbodies are not affected by forces, gravity or collisions. They are driven explicitly by setting the position and rotation of the Transform or animating them, yet they can interact with other non-Kinematic Rigidbodies.
Kinematic Rigidbodies correctly wake up other Rigidbodies when they collide with them, and they apply friction to Rigidbodies placed on top of them.
These are a few example uses for Kinematic Rigidbodies:
- Sometimes you want an object to be under physics control but in another situation to be controlled explicitly from a script or animation. For example you could make an animated character whose bones have Rigidbodies attached that are connected with joints for use as a Ragdoll. Most of the time the character is under animation control, thus you make the Rigidbody Kinematic. But when he gets hit you want him to turn into a Ragdoll and be affected by physics. To accomplish this, you simply disable the isKinematic property.
- Sometimes you want a moving object that can push other objects yet not be pushed itself. For example if you have an animated platform and you want to place some Rigidbody boxes on top, you should make the platform a Kinematic Rigidbody instead of just a Collider without a Rigidbody.
- You might want to have a Kinematic Rigidbody that is animated and have a real Rigidbody follow it using one of the available Joints.
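The ragdoll hand-off described in the first example above can be sketched as follows (OnRagdollHit is a hypothetical function you would call from your own damage logic):

```javascript
// Switches a limb from animation control to physics control.
// While isKinematic is true the Rigidbody follows its animated
// Transform; once it is false, gravity and collisions take over.
function OnRagdollHit () {
    rigidbody.isKinematic = false;
}
```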
Static Colliders
A Static Collider is a GameObject that has a Collider but not a Rigidbody. Static Colliders are used for level geometry which always stays at the same place and never moves around. You can add a Mesh Collider to your already existing graphical meshes (even better use the Generate Colliders check box), or you can use one of the other Collider types.
You should never move a Static Collider on a frame by frame basis. Moving Static Colliders will cause an internal recomputation in PhysX that is quite expensive and which will result in a big drop in performance. On top of that the behaviour of waking up other Rigidbodies based on a Static Collider is undefined, and moving Static Colliders will not apply friction to Rigidbodies that touch it. Instead, Colliders that move should always be Kinematic Rigidbodies.
Character Controllers
You use Character Controllers if you want to make a humanoid character. This could be the main character in a third-person platformer, an FPS, or any enemy character.
These Controllers don't follow the rules of physics, since that would not feel right (in Doom you run 90 miles per hour, come to a halt in one frame, and turn on a dime). Instead, a Character Controller performs collision detection to make sure your characters can slide along walls, walk up and down stairs, etc.
Character Controllers are not affected by forces but they can push Rigidbodies by applying forces to them from a script. Usually, all humanoid characters are implemented using Character Controllers.
Character Controllers are inherently unphysical, so if you want to apply real physics to your character (swinging on ropes, being pushed by big rocks), you have to use a Rigidbody; this will let you use joints and forces on your character. Character Controllers are always aligned along the Y axis, so you also need to use a Rigidbody if your character needs to be able to change orientation in space (for example under a changing gravity). However, be aware that tuning a Rigidbody to feel right for a character is hard due to the unphysical way in which game characters are expected to behave. Another difference is that Character Controllers can slide smoothly over steps of a specified height, while Rigidbodies will not.
If you parent a Character Controller with a Rigidbody you will get a "Joint" like behavior.
Rigidbody
Rigidbodies enable your GameObjects to act under the control of physics. A Rigidbody can receive forces and torque to make your objects move in a realistic way. Any GameObject must contain a Rigidbody to be influenced by gravity, act under added forces via scripting, or interact with other objects through the NVIDIA PhysX physics engine.

"Rigidbodies allow GameObjects to act under physical influence"
Properties
| Mass | The mass of the object (in kilograms). It is recommended that masses be no more than 100 times greater or smaller than those of other Rigidbodies. |
| Drag | How much air resistance affects the object when moving from forces. 0 means no air resistance; infinity makes the object stop moving immediately. |
| Angular Drag | How much air resistance affects the object when rotating from torque. 0 means no air resistance; infinity makes the object stop rotating immediately. |
| Use Gravity | If enabled, the object is affected by gravity. |
| Is Kinematic | If enabled, the object will not be driven by the physics engine and can only be manipulated by its Transform. This is useful for moving platforms or if you want to animate a Rigidbody that has a HingeJoint attached. |
| Interpolate | Try one of these options only if you see jerkiness in your Rigidbody's movement. |
| None | No interpolation is applied. |
| Interpolate | The Transform is smoothed based on the Transform of the previous frame. |
| Extrapolate | The Transform is smoothed based on the estimated Transform of the next frame. |
| Freeze Rotation | If enabled, the GameObject will not rotate as a result of collisions or forces added via script; it will only rotate when using transform.Rotate(). |
| Collision Detection | Used to prevent fast-moving objects from passing through other objects without detecting collisions. |
| Discrete | Use discrete collision detection against every other Collider in the scene. Other Colliders will use discrete collision detection when testing for collision against it. Used for normal collisions (this is the default value). |
| Continuous | Use discrete collision detection against dynamic Colliders (with a Rigidbody) and continuous collision detection against static MeshColliders (without a Rigidbody). Rigidbodies set to Continuous Dynamic will use continuous collision detection when testing for collision against this Rigidbody. Other Rigidbodies will use discrete collision detection. Used for objects which the Continuous Dynamic detection needs to collide with. (This has a big impact on physics performance, so leave it set to Discrete unless you have issues with collisions of fast objects.) |
| Continuous Dynamic | Use continuous collision detection against objects set to Continuous or Continuous Dynamic collision. It will also use continuous collision detection against static MeshColliders (without a Rigidbody). For all other Colliders it uses discrete collision detection. Used for fast-moving objects. |
| Constraints | Restrictions on the Rigidbody's motion:- |
| Freeze Position | Selectively stops the Rigidbody from moving along the world X, Y, and Z axes. |
| Freeze Rotation | Selectively stops the Rigidbody from rotating around the world X, Y, and Z axes. |
Details
Rigidbodies allow your GameObjects to act under the control of the physics engine. This opens the gateway to realistic collisions, varied types of joints, and other very cool behaviors. Manipulating your GameObjects by adding forces to a Rigidbody creates a very different feel and look than adjusting the Transform Component directly. Generally, you shouldn't manipulate both the Rigidbody and the Transform of the same GameObject; use only one or the other.
The biggest difference between manipulating the Transform versus the Rigidbody is the use of forces. Rigidbodies can receive forces and torque, but Transforms cannot. Transforms can be translated and rotated, but this is not the same as using physics. You'll notice the distinct difference when you try it for yourself. Adding forces or torque to a Rigidbody will actually change the object's position and rotation in the Transform Component. This is why you should only be using one or the other. Changing the Transform while using physics could cause problems with collisions and other calculations.
Rigidbodies must be explicitly added to your GameObject before they will be affected by the physics engine. You can add a Rigidbody to your selected object from Components->Physics->Rigidbody in the menu bar. Now your object is physics-ready: it will fall under gravity and can receive forces via scripting, but you may need to add a Collider or a Joint to get it to behave exactly how you want.
Parenting
When an object is under physics control, it moves semi-independently of the way its transform parents move. If you move any parents, they will pull the Rigidbody child along with them. However, the Rigidbodies will still fall down due to gravity and react to collision events.
Scripting
To control your Rigidbodies, you will primarily use scripts to add forces or torque. You do this by calling AddForce() and AddTorque() on the object's Rigidbody. Remember that you shouldn't be directly altering the object's Transform when you are using physics.
Animation
For some situations, mainly when creating ragdoll effects, it is necessary to switch control of an object between animations and physics. For this purpose, Rigidbodies can be marked isKinematic. While a Rigidbody is marked isKinematic, it will not be affected by collisions, forces, or any other part of PhysX. This means that you will have to control the object by manipulating the Transform Component directly. Kinematic Rigidbodies will affect other objects, but they themselves will not be affected by physics. For example, Joints which are attached to kinematic objects will constrain any other Rigidbodies attached to them, and kinematic Rigidbodies will affect other Rigidbodies through collisions.
Colliders
Colliders are another kind of component that must be added alongside a Rigidbody in order for collisions to occur. If two Rigidbodies bump into each other, the physics engine will not calculate a collision unless both objects also have a Collider attached. Collider-less Rigidbodies will simply pass through each other during physics simulation.

"Colliders define the physical boundaries of a Rigidbody"
Add a Collider from the Component->Physics menu. See the Component Reference page of each individual Collider for more specific information:
- Box Collider - primitive shape of a cube
- Sphere Collider - primitive shape of a sphere
- Capsule Collider - primitive shape of a capsule
- Mesh Collider - creates a collider from the object's mesh; cannot collide with another Mesh Collider
- Wheel Collider - specifically for creating cars or other moving vehicles
Compound Colliders
Compound Colliders are combinations of primitive Colliders, collectively acting as a single Collider. They come in handy when you have a complex mesh to use in collisions but cannot use a Mesh Collider. To create a Compound Collider, create child objects of your colliding object, then add a primitive Collider to each child object. This allows you to position, rotate, and scale each Collider easily and independently of the others.

"A real-world Compound Collider setup"
In the picture above, the gun model GameObject has a Rigidbody attached, and multiple primitive Colliders as child GameObjects. When the Rigidbody parent is moved around by forces, the child Colliders move along with it. The primitive Colliders will collide with the environment's Mesh Collider, and the parent Rigidbody will alter the way it moves based both on the forces applied to it and on how its child Colliders interact with other Colliders in the scene.
Mesh Colliders can't normally collide with each other. If a Mesh Collider is marked as Convex, then it can collide with other Mesh Colliders. The typical approach is to use a combination of primitive Colliders for any objects that move, and Mesh Colliders for static background objects.
Continuous Collision Detection
Continuous collision detection is a feature to prevent fast-moving colliders from passing through each other. This may happen when using normal (Discrete) collision detection, when an object is on one side of a collider in one frame and has already passed through it in the next frame. To solve this, you can enable continuous collision detection on the Rigidbody of the fast-moving object. Set the collision detection mode to Continuous to prevent the Rigidbody from passing through any static (i.e. non-Rigidbody) MeshColliders. Set it to Continuous Dynamic to also prevent the Rigidbody from passing through any other supported Rigidbodies whose collision detection mode is set to Continuous or Continuous Dynamic. Continuous collision detection is supported for Box, Sphere, and Capsule Colliders.
Use the Right Size
The size of your GameObject's mesh is much more important than the mass of its Rigidbody. If you find that your Rigidbody is not behaving exactly as you expect (it moves slowly, floats, or doesn't collide correctly), consider adjusting the scale of your mesh asset. Unity's default unit scale is 1 unit = 1 meter, so the scale of your imported mesh is maintained and applied to physics calculations. For example, a crumbling skyscraper is going to fall apart very differently than a tower made of toy blocks, so objects of different sizes should be modeled to accurate scale.
If you are modeling a human, make sure the model is around 2 meters tall in Unity. To check whether your object has the right size, compare it to a default cube. Create a fresh cube from the menu bar; the cube's height will be exactly 1 meter, so your human should be twice as tall.
If you cannot adjust the mesh itself, you can change the uniform scale of a particular mesh asset by selecting it in the Project View and opening its Import Settings from the menu bar. Here, you can change the scale and re-import your mesh.
If your game requires that your GameObject be instantiated at different scales, it is okay to adjust the values of your Transform's scale axes. The downside is that the physics simulation must do more work at the time the object is instantiated, which could cause a performance drop in your game. This isn't a terrible loss, but it is not as efficient as finalizing your scale with the other two options. Also keep in mind that non-uniform scales can create undesirable behaviors when parenting is used. For these reasons it is always best to create your objects at the correct scale in your modeling application.
Hints
- The relative Mass of two Rigidbodies determines how they react when they collide with each other.
- Making one Rigidbody have greater Mass than another does not make it fall faster in free fall. Use Drag for that.
- A low Drag value makes an object seem heavy, while a high one makes it seem light. Typical values for Drag are between .001 (a solid block of metal) and 10 (a feather).
- If you are directly manipulating the Transform Component of your object but still want physics, attach a Rigidbody and make it kinematic.
- If you are moving a GameObject through its Transform Component but you want to receive collision/trigger messages, you must attach a Rigidbody to the object that is moving.
Constant Force
Constant Force is a quick utility for adding constant forces to a Rigidbody. This works well for one-shot objects like rockets, if you don't want them to start with a large velocity but instead want them to accelerate.

"A rocket propelled forward by a Constant Force"
Properties
| Force | The vector of a force to be applied in world space. |
| Relative Force | The vector of a force to be applied in the object's local space. |
| Torque | The vector of a torque, applied in world space. The object will begin spinning around this vector. The longer the vector is, the faster the rotation. |
| Relative Torque | The vector of a torque, applied in local space. The object will begin spinning around this vector. The longer the vector is, the faster the rotation. |
Details
To make a rocket that accelerates forward, set the Relative Force along the positive Z axis. Then use the Rigidbody's Drag property to make it not exceed some maximum velocity (the higher the drag, the lower the maximum velocity will be). In the Rigidbody, also make sure to turn off gravity so that the rocket will always stay on its path.
Hints
- To make an object flow upwards, add a Constant Force with a positive Y value in its Force property.
- To make an object fly forwards, add a Constant Force with a positive Z value in its Relative Force property.
Sphere Collider
The Sphere Collider is a basic sphere-shaped collision primitive.

"A pile of Sphere Colliders"
Properties
| Material | Reference to the Physic Material to use. The Physic Material defines how the Collider behaves physically when it collides with others. |
| Is Trigger | If enabled, this Collider is used for triggering events, and is ignored by the physics engine. |
| Radius | The size of the Collider. |
| Center | The position of the Collider in the object's local space. |
Details
The Sphere Collider can be resized to a uniform scale, but not along individual axes. It works great for falling boulders, ping pong balls, marbles, etc.

"A standard Sphere Collider"
Colliders work with Rigidbodies to bring physics in Unity to life. Whereas Rigidbodies allow objects to be controlled by physics, Colliders allow objects to collide with each other. Colliders must be added to objects independently of Rigidbodies. A Collider does not necessarily need a Rigidbody attached, but a Rigidbody must be attached for the object to move as a result of collisions.
When a collision occurs between two Colliders, and at least one of the objects has a Rigidbody attached, three collision messages are sent out. These events can be handled in scripting, allowing you to create unique behaviors with or without making use of the built-in NVIDIA PhysX engine.
Triggers
An alternative way of using Colliders is to mark them as Triggers, by checking the IsTrigger property checkbox in the Inspector. Triggers are effectively ignored by the physics engine, and have a unique set of three trigger messages that are sent out when a collision with a Trigger occurs. Triggers are useful for triggering other events in your game, such as cutscenes, automatic door opening, displaying tutorial messages, and so on; use your imagination!
Be aware that in order for two Triggers to send out trigger events when they collide, one of them must include a Rigidbody as well. Likewise, for a Trigger to collide with a normal Collider, one of them must have a Rigidbody attached. For a detailed chart of the different types of collisions, see the collision action matrix in the Advanced section below.
Friction and bounciness
Friction, bounciness, and softness are defined in the Physic Material. The Standard Assets contain the most common physic materials. To use one of them, click the Physic Material drop-down and select one, e.g. Ice. You can also create your own physic materials and tweak all friction values.
Compound Colliders
Compound Colliders are combinations of primitive Colliders, collectively acting as a single Collider. They come in handy when you have a complex mesh to use in collisions but cannot use a Mesh Collider. To create a Compound Collider, create child objects of your colliding object, then add a primitive Collider to each child object. This allows you to position, rotate, and scale each Collider easily and independently of the others.

"A real-world Compound Collider setup"
In the picture above, the gun model GameObject has a Rigidbody attached, and multiple primitive Colliders as child GameObjects. When the Rigidbody parent is moved around by forces, the child Colliders move along with it. The primitive Colliders will collide with the environment's Mesh Collider, and the parent Rigidbody will alter the way it moves based both on the forces applied to it and on how its child Colliders interact with other Colliders in the scene.
Mesh Colliders can't normally collide with each other. If a Mesh Collider is marked as Convex, then it can collide with other Mesh Colliders. The typical approach is to use a combination of primitive Colliders for any objects that move, and Mesh Colliders for static background objects.
Hints
- To add multiple Colliders to an object, create child GameObjects and attach a Collider to each one. This allows each Collider to be manipulated independently.
- You can look at the gizmos in the Scene View to see how the Collider is computed on your object.
- Colliders do their best to match the scale of an object. If you have a non-uniform scale (a scale which is different along each axis), only a Mesh Collider can match it completely.
- If you are moving an object through its Transform Component but you want to receive collision/trigger messages, you must attach a Rigidbody to the moving object.
- If you make an explosion, it can be very effective to add a Rigidbody with lots of Drag and a Sphere Collider to it in order to push it slightly out from the wall it collides against.
Advanced
Collider combinations
There are numerous different combinations of collisions that can happen in Unity. Each game is unique, and different combinations may work better for different types of games. If you're using physics in your game, it will be very helpful to understand the different basic Collider types, their common uses, and how they interact with other types of objects.
Static Collider
These are GameObjects that do not have a Rigidbody attached, but do have a Collider attached. These objects should remain still, or move very little. They work great for your environment geometry. They will not move if a Rigidbody collides with them.
Rigidbody Collider
These GameObjects contain both a Rigidbody and a Collider. They are completely affected by the physics engine through scripted forces and collisions. They can collide with GameObjects that only contain a Collider. These will likely be your primary type of Collider in games that use physics.
Kinematic Rigidbody Collider
This GameObject contains a Collider and a Rigidbody which is marked IsKinematic. To move this GameObject, you modify its Transform Component, rather than applying forces. They're similar to Static Colliders, but work better when you want to move the Collider around frequently. There are some other specialized scenarios for using this GameObject.
This object can be used for circumstances in which you would normally want a Static Collider to send a trigger event. Since a Trigger must have a Rigidbody attached, you should add a Rigidbody, then enable IsKinematic. This will prevent your object from moving under physics influence, and allow you to receive trigger events when you want to.
Kinematic Rigidbodies can easily be turned on and off. This is great for creating ragdolls, when you normally want a character to follow an animation, then turn into a ragdoll when a collision occurs, prompted by an explosion or any other effect you choose.
If a Rigidbody is not moved for a long time, it can go to sleep completely. In other words, it will not be updated during the physics update loop and will stay in place. If you move a Kinematic Rigidbody Collider out from underneath normal Rigidbody Colliders that are at rest on top of it, the sleeping Rigidbodies will wake up and be correctly calculated again in the physics update. So if you have a lot of Static Colliders that you want to move around and have different objects fall on them correctly, use Kinematic Rigidbody Colliders.
Collision action matrix
Depending on the configurations of the two colliding objects, a number of different actions can occur. The chart below outlines what happens when two objects collide, based on the components attached to them. Some of the combinations only cause one of the two objects to be affected by the collision, so keep in mind the standard rule: physics will not be applied to objects that do not have Rigidbodies attached.
| Collision detection occurs and collision messages are sent upon collision | ||||||
| Static Collider | Rigidbody Collider | Kinematic Rigidbody Collider | Static Trigger Collider | Rigidbody Trigger Collider | Kinematic Rigidbody Trigger Collider | |
| Static Collider | Y | |||||
| Rigidbody Collider | Y | Y | Y | |||
| Kinematic Rigidbody Collider | Y | |||||
| Static Trigger Collider | ||||||
| Rigidbody Trigger Collider | ||||||
| Kinematic Rigidbody Trigger Collider | ||||||
| Trigger messages are sent upon collision | ||||||
| Static Collider | Rigidbody Collider | Kinematic Rigidbody Collider | Static Trigger Collider | Rigidbody Trigger Collider | Kinematic Rigidbody Trigger Collider | |
| Static Collider | Y | Y | ||||
| Rigidbody Collider | Y | Y | Y | |||
| Kinematic Rigidbody Collider | Y | Y | Y | |||
| Static Trigger Collider | Y | Y | Y | Y | ||
| Rigidbody Trigger Collider | Y | Y | Y | Y | Y | Y |
| Kinematic Rigidbody Trigger Collider | Y | Y | Y | Y | Y | Y |
Layer-Based Collision Detection
In Unity 3.x, Layer-Based Collision Detection was added as a feature; it lets you configure, for every combination of layers, which combinations generate collisions. For more information, click here.
Box Collider
The Box Collider is a basic cube-shaped collision primitive.

"A pile of Box Colliders"
Properties
| Material | Reference to the Physic Material to use. The Physic Material defines how the Collider behaves physically when it collides with others. |
| Is Trigger | If enabled, this Collider is used for triggering events, and is ignored by the physics engine. |
| Size | The size of the Collider in the X, Y, and Z directions. |
| Center | The position of the Collider in the object's local space. |
Details
The Box Collider can be resized into different shapes of rectangular prisms. It works great for doors, walls, platforms, etc. It is also effective as a human torso in a ragdoll or as a car hull in a vehicle. Of course, it works perfectly for just boxes and crates as well!

"A standard Box Collider"
Colliders work with Rigidbodies to bring physics in Unity to life. Whereas Rigidbodies allow objects to be controlled by physics, Colliders allow objects to collide with each other. Colliders must be added to objects independently of Rigidbodies. A Collider does not necessarily need a Rigidbody attached, but a Rigidbody must be attached for the object to move as a result of collisions.
When a collision occurs between two Colliders, and at least one of the objects has a Rigidbody attached, three collision messages are sent out. These events can be handled in scripting, allowing you to create unique behaviors with or without making use of the built-in NVIDIA PhysX engine.
Triggers
An alternative way of using Colliders is to mark them as Triggers, by checking the IsTrigger property checkbox in the Inspector. Triggers are effectively ignored by the physics engine, and have a unique set of three trigger messages that are sent out when a collision with a Trigger occurs. Triggers are useful for triggering other events in your game, such as cutscenes, automatic door opening, displaying tutorial messages, and so on; use your imagination!
Be aware that in order for two Triggers to send out trigger events when they collide, one of them must include a Rigidbody as well. Likewise, for a Trigger to collide with a normal Collider, one of them must have a Rigidbody attached. For a detailed chart of the different types of collisions, see the collision action matrix in the Advanced section below.
Friction and bounciness
Friction, bounciness, and softness are defined in the Physic Material. The Standard Assets contain the most common physic materials. To use one of them, click the Physic Material drop-down and select one, e.g. Ice. You can also create your own physic materials and tweak all friction values.
Mesh Collider
The Mesh Collider takes a Mesh Asset and builds its Collider based on that mesh. It is far more accurate for collision detection with complicated meshes than using primitives. Mesh Colliders that are marked as Convex can collide with other Mesh Colliders.

"A Mesh Collider used on a staircase-shaped object"
Properties
| Material | Reference to the Physic Material to use. The Physic Material defines how the Collider behaves physically when it collides with others. |
| Is Trigger | If enabled, this Collider is used for triggering events, and is ignored by the physics engine. |
| Mesh | Reference to the Mesh to use for collisions. |
| Smooth Sphere Collisions | When this is enabled, collision mesh normals are smoothed. You should enable this on smooth surfaces (for example, rolling terrain without hard edges) to make sphere rolling smoother. |
| Convex | If enabled, this Mesh Collider will collide with other Mesh Colliders. Convex Mesh Colliders are limited to 255 triangles. |
Details
The Mesh Collider builds its collision representation from the Mesh attached to the GameObject, and reads the properties of the attached Transform to set its position and scale correctly.
Collision meshes use backface culling. If an object collides with a mesh that will be backface culled graphically, it will also not collide with it physically.
There are some limitations when using the Mesh Collider. Usually, two Mesh Colliders cannot collide with each other. All Mesh Colliders can collide with any primitive Collider. If your mesh is marked as Convex, then it can collide with other Mesh Colliders.
Colliders work with Rigidbodies to bring physics in Unity to life. Whereas Rigidbodies allow objects to be controlled by physics, Colliders allow objects to collide with each other. Colliders must be added to objects independently of Rigidbodies. A Collider does not necessarily need a Rigidbody attached, but a Rigidbody must be attached for the object to move as a result of collisions.
When a collision occurs between two Colliders, and at least one of the objects has a Rigidbody attached, three collision messages are sent out. These events can be handled in scripting, allowing you to create unique behaviors with or without making use of the built-in NVIDIA PhysX engine.
Triggers
An alternative way of using Colliders is to mark them as Triggers, by checking the IsTrigger property checkbox in the Inspector. Triggers are effectively ignored by the physics engine, and have a unique set of three trigger messages that are sent out when a collision with a Trigger occurs. Triggers are useful for triggering other events in your game, such as cutscenes, automatic door opening, displaying tutorial messages, and so on; use your imagination!
Be aware that in order for two Triggers to send out trigger events when they collide, one of them must include a Rigidbody as well. Likewise, for a Trigger to collide with a normal Collider, one of them must have a Rigidbody attached. For a detailed chart of the different types of collisions, see the collision action matrix in the Advanced section below.
Friction and bounciness
Friction, bounciness, and softness are defined in the Physic Material. The Standard Assets contain the most common physic materials. To use one of them, click the Physic Material drop-down and select one, e.g. Ice. You can also create your own physic materials and tweak all friction values.
Hints
- Mesh Colliders cannot collide with each other unless they are marked as Convex. Therefore, they are most useful for background objects like environment geometry.
- Convex Mesh Colliders are limited to 255 triangles.
- Primitive Colliders are less costly for objects under physics control.
- When you attach a Mesh Collider to a GameObject, its Mesh property will default to the mesh being rendered. You can change that by assigning a different mesh.
- To add multiple Colliders to an object, create child GameObjects and attach a Collider to each one. This allows each Collider to be manipulated independently.
- You can look at the gizmos in the Scene View to see how the Collider is computed on your object.
- Colliders do their best to match the scale of an object. If you have a non-uniform scale (a scale which is different along each axis), only a Mesh Collider can match it completely.
- If you are moving an object through its Transform Component but you want to receive collision/trigger messages, you must attach a Rigidbody to the moving object.
Physic Material
The Physic Material is used to adjust friction and bouncing effects of colliding objects.
To create a Physic Material, select it from the Create menu in the menu bar. Then drag the Physic Material from the Project View onto a Collider in the scene.

"The Physic Material Inspector"
Properties
| Dynamic Friction | The friction used when the object is already moving. Usually a value from 0 to 1. A value of zero feels like ice; a value of 1 will make the object come to rest very quickly unless a lot of force or gravity pushes it. |
| Static Friction | The friction used when an object is lying still on a surface. Usually a value from 0 to 1. A value of zero feels like ice; a value of 1 will make it very hard to get the object moving. |
| Bounciness | How bouncy the surface is. A value of 0 will not bounce. A value of 1 will bounce without any loss of energy. |
| Friction Combine Mode | How the friction of two colliding objects is combined. |
| Average | The two friction values are averaged. |
| Min | The smaller of the two values is used. |
| Max | The larger of the two values is used. |
| Multiply | The two friction values are multiplied with each other. |
| Bounce Combine | How the bounciness of two colliding objects is combined. It has the same modes as Friction Combine Mode. |
| Friction Direction 2 | The direction of anisotropy. Anisotropic friction is enabled if this direction is not zero. Dynamic Friction 2 and Static Friction 2 will be applied along Friction Direction 2. |
| Dynamic Friction 2 | If anisotropic friction is enabled, Dynamic Friction 2 will be applied along Friction Direction 2. |
| Static Friction 2 | If anisotropic friction is enabled, Static Friction 2 will be applied along Friction Direction 2. |
Details
Friction is the quantity which prevents surfaces from sliding off each other. This value is critical when trying to stack objects. Friction comes in two forms: dynamic and static. Static friction is used when the object is lying still; it will prevent the object from starting to move. If a large enough force is applied to the object, it will start moving. At this point Dynamic Friction comes into play. Dynamic Friction will attempt to slow down the object while it is in contact with another.
Hints
- Don't try to use a standard physic material for the main character. Make a customized one and get it perfect.
Hinge Joint
The Hinge Joint groups together two Rigidbodies, constraining them to move as if connected by a hinge. It is perfect for doors, but can also be used to model chains, pendulums, and so on.

"The Hinge Joint Inspector"
Properties
| Connected Body | Optional reference to the Rigidbody that the joint is dependent upon. If not set, the joint connects to the world. |
| Anchor | The position of the axis around which the body swings. The position is defined in local space. |
| Axis | The direction of the axis around which the body swings. The direction is defined in local space. |
| Use Spring | A spring makes the Rigidbody reach a specific angle compared to its connected body. |
| Spring | Properties of the spring that are used if Use Spring is enabled. |
| Spring | The force the object asserts to move into position. |
| Damper | The higher this value, the more the object will slow down. |
| Target Position | Target angle of the spring. The spring pulls towards this angle, measured in degrees. |
| Use Motor | The motor makes the object spin around. |
| Motor | Properties of the motor that are used if Use Motor is enabled. |
| Target Velocity | The speed the object tries to attain. |
| Force | The force applied in order to attain the speed. |
| Free Spin | If enabled, the motor is never used to brake the spinning, only to accelerate it. |
| Use Limits | If enabled, the angle of the hinge will be restricted within the Min and Max values. |
| Limits | Properties of the limits that are used if Use Limits is enabled. |
| Min | The lowest angle the rotation can reach. |
| Max | The highest angle the rotation can reach. |
| Min Bounce | How much the object bounces when it hits the minimum stop. |
| Max Bounce | How much the object bounces when it hits the maximum stop. |
| Break Force | The force that needs to be applied for this joint to break. |
| Break Torque | The torque that needs to be applied for this joint to break. |
Details
A single Hinge Joint should be applied to a GameObject. The hinge will rotate at the point specified by the Anchor property, moving around the specified Axis property. You do not need to assign a GameObject to the joint's Connected Body property. You should only assign a GameObject to the Connected Body property if you want the joint's Transform to be dependent on the attached object's Transform.
Think about how the hinge of a door works. The Axis in this case is up, positive along the Y axis. The Anchor is placed somewhere at the intersection between door and wall. You would not need to assign the wall to the Connected Body, because the joint will be connected to the world by default.
Now think about a doggy door hinge. The doggy door's Axis would be sideways, positive along the relative X axis. The main door should be assigned as the Connected Body, so the doggy door's hinge is dependent on the main door's Rigidbody.
Chains
Multiple Hinge Joints can also be strung together to create a chain. Add a joint to each link in the chain, and attach the next link as the Connected Body.
Hints
- You do not need to assign a Connected Body to your joint for it to work.
- Use Break Force in order to make dynamic damage systems. This is really cool as it allows the player to break a door off its hinges by blasting it with a rocket launcher or running into it with a car.
- You can fine-tune your joint's behavior with the Spring, Motor, and Limits properties.
Spring Joint
The Spring Joint groups together two Rigidbodies, constraining them to move as if connected by a spring.

"The Spring Joint Inspector"
Properties
| Connected Body | Optional reference to the Rigidbody that the joint is dependent upon. |
| Anchor | Position in the object's local space (at rest) that defines the center of the joint. This is not the point that the object will be drawn toward. |
| X | Position of the joint's local center along the X axis. |
| Y | Position of the joint's local center along the Y axis. |
| Z | Position of the joint's local center along the Z axis. |
| Spring | Strength of the spring. |
| Damper | Amount by which the spring is reduced when active. |
| Min Distance | Distances greater than this will not cause the spring to activate. |
| Max Distance | Distances less than this will not cause the spring to activate. |
| Break Force | The force that needs to be applied for this joint to break. |
| Break Torque | The torque that needs to be applied for this joint to break. |
Details
The Spring Joint allows a Rigidbody GameObject to be pulled toward a particular target position. This position will either be another Rigidbody GameObject or the world. As the GameObject travels further away from this target position, the Spring Joint applies forces that pull it back toward its original target position. This creates an effect very similar to a rubber band or a slingshot.
The target position of the spring is determined by the relative position from the Anchor to the Connected Body (or the world) when the Spring Joint is created, or when Play mode is entered. This makes the Spring Joint very effective at setting up jointed characters or objects in the Editor, but it is harder to create push/pull spring behaviors at runtime through scripting. If you want to primarily control a GameObject's position using a Spring Joint, it is best to create an empty GameObject with a Rigidbody, and set that to be the Connected Rigidbody of the jointed object. Then in scripting you can change the position of the Connected Rigidbody and see your spring move in the ways you expect.
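The runtime trick just described can be sketched like this (a minimal, hypothetical script; the anchor variable is assumed to reference the empty GameObject's Rigidbody, assigned in the Inspector):

```javascript
// Drags the spring's target to the right over time; the jointed
// object is then pulled after it by the Spring Joint.
var anchor : Rigidbody;

function FixedUpdate () {
    anchor.MovePosition(anchor.position + Vector3.right * Time.deltaTime);
}
```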
Connected Rigidbody
- ジョイントを起動させるのに、Connected Bodyがを使用する必要はありません。 一般に、オブジェクトの位置および / または回転が依存していない場合のみに使用する必要があります。 Connected Rigidbodyがない場合は、スプリングはワールドに接続します。
スプリングとダンパー
スプリングは、オブジェクトをその目的の位置に引っ張り戻す力の強さです。 0 の場合、オブジェクトに引っ張る力はかからず、スプリング ジョイントがかからないかのよう動作します。
Damperは、Springの力に対する抵抗力です。 この値が低いほど、オブジェクトのスプリング力は強くなります。 Damperが増えると、ジョイントによる跳ね返りの量は減ります。
Min & Max Distance
オブジェクトの位置がMinとMax Distances間にある場合、ジョイントはオブジェクトに適用されません。 この位置を、有効にするジョイントに対して、これらの値外に移動させる必要があります。
ヒント
- ジョイントが機能するよう、Connected Bodyをジョイントに割り当てる必要はありません。
- 再生モードに入る前に、エディタでジョイント オブジェクトの理想的な位置を設定します。
- スプリング ジョイントは、リジッドボディを追加する必要があります。

iOS
iOS physics optimization hints can be found here.
Random Numbers
Randomly chosen items or values are important in many games. This section shows how you can use Unity's built-in random functions to implement some common game mechanics.
Choosing a Random Item from an Array
Picking an array element at random boils down to choosing a random integer between zero and the array's maximum index value (which is equal to the length of the array minus one). This is easily done using the built-in Random.Range function:-
var element = myArray[Random.Range(0, myArray.Length)];
Note that Random.Range returns a value from a range that includes the first parameter but excludes the second, so using myArray.Length here gives the correct result.
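To see the half-open convention in isolation, here is a plain-JavaScript sketch outside Unity; randRange is a hypothetical stand-in for the integer form of Random.Range, with Math.random replacing Unity's random source:

```javascript
// Hypothetical stand-in for Unity's integer Random.Range(min, max):
// returns an integer that includes min but excludes max.
function randRange(min, max) {
    return min + Math.floor(Math.random() * (max - min));
}

// Picking a random element this way never reads past the end of the array.
var myArray = ["a", "b", "c"];
var element = myArray[randRange(0, myArray.length)];
```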
Choosing Items with Different Probabilities
Sometimes, you need to choose items at random but with some items more likely to be chosen than others. For example, an NPC may react in several different ways when it encounters a player:-
- 50% chance of friendly greeting
- 25% chance of running away
- 20% chance of immediate attack
- 5% chance of offering money as a gift
You can visualise these different outcomes as a paper strip divided into sections each of which occupies a fraction of the strip's total length. The fraction occupied is equal to the probability of that outcome being chosen. Making the choice is equivalent to picking a random point along the strip's length (say by throwing a dart) and then seeing which section it is in.

In the script, the paper strip is actually an array of floats that contain the different probabilities for the items in order. The random point is obtained by multiplying Random.value by the total of all the floats in the array (they need not add up to 1; the significant thing is the relative size of the different values). To find which array element the point is "in", firstly check to see if it is less than the value in the first element. If so, then the first element is the one selected. Otherwise, subtract the first element's value from the point value and compare that to the second element and so on until the correct element is found. In code, this would look something like the following:-
function Choose(probs: float[]) {
    var total = 0.0;

    for (var elem in probs) {
        total += elem;
    }

    var randomPoint = Random.value * total;

    for (var i = 0; i < probs.Length; i++) {
        if (randomPoint < probs[i])
            return i;
        else
            randomPoint -= probs[i];
    }

    return probs.Length - 1;
}
Note that the final return statement is necessary because Random.value can return a result of 1. In this case, the search will not find the random point anywhere. Changing the line
if (randomPoint < probs[i])
...to a less-than-or-equal test would avoid the extra return statement but would also allow an item to be chosen occasionally even when its probability is zero.
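The same strip-walking logic can be checked outside Unity. In this plain-JavaScript sketch (Math.random stands in for Random.value), the random point is passed into a separate pick function so the walk itself is easy to verify by hand:

```javascript
// Walk the strip: subtract each section's width until the point lands in one.
function pick(probs, randomPoint) {
    for (var i = 0; i < probs.length; i++) {
        if (randomPoint < probs[i]) {
            return i;
        }
        randomPoint -= probs[i];
    }
    return probs.length - 1; // the point can equal the total exactly
}

// Full choice: throw the dart, then walk the strip.
function choose(probs) {
    var total = 0;
    for (var i = 0; i < probs.length; i++) {
        total += probs[i];
    }
    return pick(probs, Math.random() * total);
}

// The NPC example above: greeting, run away, attack, offer money.
var reaction = choose([50, 25, 20, 5]);
```

With the strip [50, 25, 20, 5], a point of 60 lands in the second section (60 - 50 = 10, which is less than 25), so pick returns index 1.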
Shuffling a List
A common game mechanic is to choose from a known set of items but have them arrive in random order. For example, a deck of cards is typically shuffled so they are not drawn in a predictable sequence. You can shuffle the items in an array by visiting each element and swapping it with another element at a random index in the array:-
function Shuffle(deck: int[]) {
    for (var i = 0; i < deck.Length; i++) {
        var temp = deck[i];
        var randomIndex = Random.Range(0, deck.Length);
        deck[i] = deck[randomIndex];
        deck[randomIndex] = temp;
    }
}
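The shuffle can be sanity-checked outside Unity with plain JavaScript (Math.random again replacing Unity's Random class): whatever order results, the output must be a permutation of the input:

```javascript
// Swap-based shuffle, as above: visit each slot and swap it with a random one.
function shuffle(deck) {
    for (var i = 0; i < deck.length; i++) {
        var randomIndex = Math.floor(Math.random() * deck.length);
        var temp = deck[i];
        deck[i] = deck[randomIndex];
        deck[randomIndex] = temp;
    }
    return deck;
}

var deck = shuffle([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]);
```

Strictly speaking, swapping with an index drawn from the whole array is slightly biased; the classic Fisher-Yates variant draws randomIndex only from the not-yet-visited portion (in Unity terms, Random.Range(i, deck.Length)) so that all orderings are equally likely.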
Choosing from a Set of Items Without Repetition
A common task is to pick a number of items randomly from a set without picking the same one more than once. For example, you may want to generate a number of NPCs at random spawn points but be sure that only one NPC gets generated at each point. This can be done by iterating through the items in sequence, making a random decision for each as to whether or not it gets added to the chosen set. As each item is visited, the probability of its being chosen is equal to the number of items still needed divided by the number still left to choose from.
As an example, suppose that ten spawn points are available but only five must be chosen. The probability of the first item being chosen will be 5 / 10 or 0.5. If it is chosen then the probability for the second item will be 4 / 9 or 0.44 (ie, four items still needed, nine left to choose from). However, if the first was not chosen then the probability for the second will be 5 / 9 or 0.56 (ie, five still needed, nine left to choose from). This continues until the set contains the five items required. You could accomplish this in code as follows:-
var spawnPoints: Transform[];

function ChooseSet(numRequired: int) {
    var result = new Transform[numRequired];
    var numToChoose = numRequired;

    for (var numLeft = spawnPoints.Length; numLeft > 0; numLeft--) {
        // Adding 0.0 casts the integers to float so the division is not truncated.
        var prob = (numToChoose + 0.0) / (numLeft + 0.0);

        if (Random.value <= prob) {
            numToChoose--;
            result[numToChoose] = spawnPoints[numLeft - 1];

            if (numToChoose == 0)
                break;
        }
    }

    return result;
}
Note that although the selection is random, items in the chosen set will be in the same order they had in the original array. If the items are to be used one at a time in sequence then the ordering can make them partly predictable, so it may be necessary to shuffle the array before use.
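Again as a plain-JavaScript sketch (Math.random in place of Random.value), the selection always yields exactly the requested number of distinct items, because the probability numToChoose / numLeft reaches 1 as soon as every remaining item is needed:

```javascript
// Select numRequired distinct items from the array, preserving their order.
function chooseSet(items, numRequired) {
    var result = [];
    var numToChoose = numRequired;
    for (var numLeft = items.length; numLeft > 0; numLeft--) {
        var prob = numToChoose / numLeft;
        if (Math.random() <= prob) {
            numToChoose--;
            result.unshift(items[numLeft - 1]); // walked from the end, so re-insert at the front
            if (numToChoose === 0) {
                break;
            }
        }
    }
    return result;
}

var points = chooseSet([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], 5);
```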
Random Points in Space
A random point in a cubic volume can be chosen by setting each component of a Vector3 to a value returned by Random.value:-
var randVec = Vector3(Random.value, Random.value, Random.value);
This gives a point inside a cube with sides one unit long. The cube can be scaled simply by multiplying the X, Y and Z components of the vector by the desired side lengths. If one of the axes is set to zero, the point will always lie within a single plane. For example, picking a random point on the "ground" is usually a matter of setting the X and Z components randomly and setting the Y component to zero.
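In plain-JavaScript terms (Math.random standing in for Random.value, and a simple object standing in for Vector3), scaling the cube and flattening one axis looks like this:

```javascript
// Random point inside a box with the given side lengths.
// Passing 0 for one side length confines the point to a plane.
function randomPointInBox(sizeX, sizeY, sizeZ) {
    return {
        x: Math.random() * sizeX,
        y: Math.random() * sizeY,
        z: Math.random() * sizeZ
    };
}

// A random point on a 20 x 20 "ground" plane: Y is always zero.
var groundPoint = randomPointInBox(20, 0, 20);
```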
When the volume is a sphere (ie, when you want a random point within a given radius from a point of origin), you can use Random.insideUnitSphere multiplied by the desired radius:-
var randWithinRadius = Random.insideUnitSphere * radius;
Note that if you set one of the resulting vector's components to zero, you will *not* get a correct random point within a circle. Although the point is indeed random and lies within the right radius, the probability is heavily biased toward the edge of the circle and so points will be spread very unevenly. You should use Random.insideUnitCircle for this task instead:-
var randWithinCircle = Random.insideUnitCircle * radius;
Particle Systems
Note: This documentation refers to the new particle system (Shuriken). For the older particle system, see Legacy Particle System.
Particle Systems (Shuriken)
Particle systems in Unity are used to create masses of smoke, steam, flames, and other atmospheric effects.

You can create a new particle system either by creating a Particle System GameObject (via the menu) or by creating an empty GameObject and adding the ParticleSystem component to it (via the menu).
The Particle System Inspector (Shuriken)
The Inspector shows one particle system at a time (the currently selected one), and it looks like this:

Individual particle systems can take on various complex behaviors by using Modules.
They can also be extended by being grouped together into Particle Effects.
If you press the Open Editor button, this will open up the extended Particle Editor, which shows all of the particle systems under the same root in the scene tree. For more information on particle system grouping, see the section on Particle Effects.
Scene View Editing
When you create or edit a particle system, or use the extended editor, your changes are reflected in the Scene View. The Scene View has a preview panel, where playback of the currently selected Particle Effect can be controlled in Edit Mode, with playback actions provided.

You can scrub the playback time by dragging the Playback Time label. All playback controls have shortcut keys, which can be customized in the Preferences window.
Particle System Curve Editor
MinMax Curve
Many of the properties in the particle system modules describe a change of a value with time. That change is described via MinMax Curves. These time-animated properties (for example size and speed) have a pull-down menu on the right-hand side, where you can choose between:
Constant: The value of the property will not change with time, so it is not displayed in the Curve Editor.
Random between constants: The value of the property will be set to a random value between the two given constants.
Curve: The value of the property will change with time based on the curve specified in the Curve Editor.

A property animated with a curve
Random between curves: The value of the property will be set at random between the min and max curves, and will change with time based on the generated curve.

A property animated as Random Between Two Curves.
In the Curve Editor, the X axis spans time between 0 and the value specified by the Duration property, and the Y axis shows the value of the animated property at each point in time. The range of the Y axis can be adjusted in the number field in the upper-right corner of the Curve Editor. At present, the Curve Editor displays all of the curves for a particle system in the same window.
Multiple curves shown in the same Curve Editor.
Note that the "-" in the lower-right corner removes the currently selected curve, while the "+" optimizes it (making it a parametrized curve with at most three keys).
For animated properties that describe vectors in 3D space, Triple MinMax Curves are provided; these are simply the X, Y, and Z curves displayed side by side, and look like this:

Managing many curves in the Curve Editor
To avoid cluttering the Curve Editor, individual curves can be toggled on and off by clicking them in the Inspector. The Particle System Curve Editor can also be detached from the Inspector by right-clicking on the Particle System Curves title bar, after which you should see something like this:
The Curve Editor window can be docked like any other window.
For more information on working with curves, see the Curve Editor documentation.
Particle System Colors and Gradients (Shuriken)

For properties that deal with color, the particle system uses the Color and Gradient Editor. It works in a similar way to the Curve Editor.
Color-based properties have a pull-down menu on the right-hand side, where you can choose between:

Color: The color will be the same at all times (see Color Picker).
Gradient: The gradient (RGBA) will change with time, as edited in the Gradient Editor.
Random Between Two Colors: The color varies with time and is chosen at random between two values specified in the Color Picker.
Random Between Two Gradients: The gradient (RGBA) is chosen at random between two values specified in the Gradient Editor, and varies with time.
- Particle System Curve Editor
- Particle System Colors and Gradients (Shuriken)
- Gradient Editor
- Particle System Inspector
- Introduction to Particle System Modules (Shuriken)
- Particle System Modules (Shuriken)
- Particle Effects (Shuriken)
Particle System Curve Editor
Page last updated: 2012-11-22
Particle System Color Editor

Page last updated: 2012-11-21
Particle System Gradient Editor

The Gradient Editor is used for describing the change of a gradient with time. It animates the color (RGB space, described by the markers at the bottom) and the alpha (described by the markers at the top).
You can add new markers for alpha values by clicking near the top of the rectangle, and new markers for color by clicking near the bottom. The markers can be dragged along the timeline.
If an alpha marker is selected, you can edit its value by dragging the alpha field.
If a color marker is selected, the color can be modified by double-clicking the marker or clicking the color bar.
To remove a marker, just drag it off the screen.
Page last updated: 2012-08-28
Particle System Inspector
The Particle System Inspector (Shuriken)
Page last updated: 2012-08-28
Particle System Modules Intro
A Particle System consists of a predefined set of modules that can be enabled and disabled. These modules describe the behavior of particles in an individual particle system.
Initially only a few modules are enabled. Adding or removing modules changes the behavior of the particle system. You can add new modules by pressing the (+) sign in the top-right corner of the Particle System Inspector. This pops up a selection menu, where you can choose the module you want to enable.
An alternative way to work with modules is to select "Show All Modules", at which point all of the modules will show up in the inspector.
Then you can enable / disable modules directly from the inspector by clicking the checkbox to the left.

Most of the properties are controllable by curves (see Curve Editor). Color properties are controlled via gradients which define an animation for color (see Color Editor).
For details on individual modules and their properties, see Particle System Modules
Page last updated: 2012-10-25
Particle System Modules
This page is dedicated to individual modules and their properties. For an introduction to modules, see this page.
Initial Module

This module is always present; it cannot be removed or disabled.
| Duration | The duration the Particle System will be emitting particles. |
| Looping | Is the Particle System looping. |
| Prewarm | Only looping systems can be prewarmed which means that the Particle System will have emitted particles at start as if it had already emitted particles one cycle. |
| Start Delay | Delay in seconds that this Particle System will wait before emitting particles. Note prewarmed looping systems cannot use a start delay. |
| Start Lifetime | The lifetime of particles in seconds (see MinMaxCurve). |
| Start Speed | The speed of particles when emitted. (see MinMaxCurve). |
| Start Size | The size of particles when emitted. (see MinMaxCurve). |
| Start Rotation | The rotation of particles when emitted. (see MinMaxCurve). |
| Start Color | The color of particles when emitted. (see MinMaxGradient). |
| Gravity Modifier | The amount of gravity that will affect particles during their lifetime. |
| Inherit Velocity | Factor for controlling the amount of velocity the particles should inherit from the transform of the Particle System (for moving Particle Systems). |
| Simulation Space | Simulate the Particle System in local space or world space. |
| Play On Awake | If enabled the Particle System will automatically start when it's created. |
| Max Particles | Max number of particles the Particle System will emit. |
Emission Module

Controls the rate of particles being emitted and allows spawning large groups of particles at certain moments (over Particle System duration time). Useful for explosions when a bunch of particles need to be created at once.
| Rate | Amount of particles emitted over Time (per second) or Distance (per meter). (see MinMaxCurve) |
| Bursts (Time option only) | Add bursts of particles that occur within the duration of the Particle System |
| Time and Number of Particles | Specify time (in seconds within duration) that a specified amount of particles should be emitted. Use the + and - for adjusting number of bursts. |
Shape Module

Defines the shape of the emitter: Sphere, Hemisphere, Cone, Box or Mesh. Can apply initial force along the surface normal or a random direction.
| Sphere | |
| Radius | Radius of the sphere (can also be manipulated by handles in the Scene View) |
| Emit from Shell | Emit from shell of the sphere. If disabled, particles will be emitted from the volume of the sphere. |
| Random Direction | Should particles have a random direction when emitted or a direction along the surface normal of the sphere |
| Hemisphere | |
| Radius | Radius of the hemisphere (can also be manipulated by handles in the Scene View) |
| Emit from Shell | Emit from shell of the hemisphere. If disabled particles will be emitted from the volume of the hemisphere. |
| Random Direction | Should particles have a random direction when emitted or a direction along the surface normal of the hemisphere. |
| Cone | |
| Angle | Angle of the cone. If angle is 0 then particles will be emitted in one direction. (can also be manipulated by handles in the Scene View) |
| Radius | A value larger than 0 will basically create a capped cone; using this will change emission from a point to a disc. (Can also be manipulated by handles in the Scene View.) |
| Emit From | Determines where emission originates from. Possible values are Base, Base Shell, Volume and Volume Shell. |
| Box | |
| Box X | Scale of box in X (can also be manipulated by handles in the Scene View) |
| Box Y | Scale of box in Y (can also be manipulated by handles in the Scene View) |
| Box Z | Scale of box in Z (can also be manipulated by handles in the Scene View) |
| Random Direction | Should particles have a random direction when emitted or a direction along the Z-axis of the box |
| Mesh | |
| Type | Particles can be emitted from either Vertex, Edge or Triangle |
| Mesh | Select Mesh that should be used as emission shape |
| Random Direction | Should particles have a random direction when emitted or a direction along the surface of the mesh |
Velocity Over Lifetime Module

Directly animates the velocity of each particle. This is mostly useful for particles which have complex physical, but simple visual, behavior (like smoke with turbulence and temperature loss) and have little interaction with the physical world.
| XYZ | Use either constant values for curves or random between curves for controlling the movement of the particles. See MinMaxCurve. |
| Space | Local / World: Are the velocity values in local space or world space |
Limit Velocity Over Lifetime Module

Essentially used to simulate drag: dampens or clamps the velocity if it is over a certain threshold. Can be configured per axis or per vector length.
| Separate Axis | Use for setting per axis control. |
| Speed | Specify magnitude as constant or by curve that will limit all axes of velocity. |
| XYZ | Control each axis separately. See MinMaxCurve. |
| Dampen | (0-1) value that controls how much the exceeding velocity should be dampened. For example, a value of 0.5 will dampen exceeding velocity by 50% |
Force Over Lifetime Module

| XYZ | Use either constant values for curves or random between curves for controlling the force applied to the particles. See MinMaxCurve. |
| Randomize | Randomize the force applied to the particles every frame |
Color Over Lifetime Module

| Color | Controls the color of each particle during its lifetime. If some particles have a shorter lifetime than others, they will animate faster. Use constant color, random between two colors, animate it using gradient or specify a random color using two gradients (see Gradient). Note that this colour will be multiplied by the value in the Start Color property - if the Start Color is black then Color Over Lifetime will not affect the particle. |
| Color Scale | Use the color scale for easy adjustment of color or gradient. |
Color By Speed Module

Animates particle color based on its speed. Remaps speed in the defined range to a color.
| Color | Color used for remapping of speed. Use gradients for varying colors. See MinMaxGradient. |
| Color Scale | Use the color scale for easy adjustment of color or gradient. |
| Speed Range | The min and max values for defining the speed range which is used for remapping a speed to a color. |
Size Over Lifetime Module

| Size | Controls the size of each particle during its lifetime. Use constant size, animate it using a curve or specify a random size using two curves. See MinMaxCurve. |
Size By Speed Module

| Size | Size used for remapping of speed. Use curves for varying sizes. See MinMaxCurve. |
| Speed Range | The min and max values for defining the speed range which is used for remapping a speed to a size. |
Rotation Over Lifetime Module

Specify values in degrees.
| Rotational Speed | Controls the rotational speed of each particle during its lifetime. Use constant rotational speed, animate it using a curve or specify a random rotational speed using two curves. See MinMaxCurve. |
Rotation By Speed Module

| Rotational Speed | Rotational speed used for remapping of a particle's speed. Use curves for varying rotational speeds. See MinMaxCurve. |
| Speed Range | The min and max values for defining the speed range which is used for remapping a speed to a rotational speed. |
External Forces Module

| Multiplier | Scale factor that determines how much the particles are affected by wind zones (i.e., the wind force vector is multiplied by this value). |
Collision Module

Set up collisions for the particles of this Particle System. World and planar collisions are supported. Planar collision is very efficient for simple collision detection. Planes are set up by referencing an existing transform in the scene or by creating a new empty GameObject for this purpose. Another benefit of planar collision is that particle systems with collision planes can be set up as prefabs. World collision uses raycasts so must be used with care in order to ensure good performance. However, for cases where approximate collisions are acceptable world collision in Low or Medium quality can be very efficient.
Properties common for any Collision Module
| Planes/World | Specify the collision type: Planes for planar collision or World for world collisions. |
| Dampen | (0-1) When the particle collides, it will keep this fraction of its speed. Unless it is set to 1.0, the particle will become slower after collision. |
| Bounce | (0-1) When the particle collides, it will keep this fraction of the component of the velocity, which is normal to the plane of collision. |
| Lifetime Loss | (0-1) The fraction of Start Lifetime lost on each collision. When lifetime reaches 0, the particle dies. For example if a particle should die on first collision, set this to 1.0. |
| Min Kill Speed | The minimum speed of a particle before it is killed. |
Properties available only in the Planes Mode
| Planes | Planes are defined by assigning a reference to a transform. This transform can be any transform in the scene and can be animated. Multiple planes can be used. Note: the Y-axis is used as the normal of a plane. |
| Visualization | Only used for visualizing the planes: Grid or Solid. |
| Grid | Rendered as gizmos and is useful for quick indication of position and orientation in the world. |
| Solid | Renders a plane in the scene which is useful for exact positioning of a plane. |
| Scale Plane | Resizes the visualization planes. |
| Particle Radius | The assumed radius of the particle for collision purposes. |
Properties available only in the World Mode
| Collides With | Filter for specifying colliders. Select Everything to collide with the whole world. |
| Collision Quality | The quality of the world collision. |
| High | All particles perform a scene raycast per frame. Note: this is CPU intensive; it should only be used with 1000 simultaneous particles (scene-wide) or fewer. |
| Medium | The particle system receives a share of the globally set Particle Raycast Budget (see Particle Raycast Budget) in each frame. Particles are updated in a round-robin fashion where particles that do not receive a raycast in a given frame will look up and use older collisions stored in a cache. Note: This collision type is approximate and some particles will leak, particularly at corners. |
| Low | Same as Medium except the particle system is only awarded a share of the Particle Raycast Budget every fourth frame. |
| Voxel Size | Density of the voxels used for caching intersections used in the Medium and Low quality setting. The size of a voxel is given in scene units. Usually, 0.5 - 1.0 should be used (assuming metric units). |
Sub Emitter Module

This is a powerful module that enables spawning of other Particle Systems at the following particle events: birth, death or collision of a particle.
| Birth | Spawn another Particle System at birth of each particle in this Particle System |
| Death | Spawn another Particle System at death of each particle in this Particle System |
| Collision | Spawn another Particle System at collision of each particle in this Particle System. IMPORTANT: Collision needs to be set up using the Collision Module. See Collision Module |
Texture Sheet Animation Module

Animates the UV coordinates of the particle over its lifetime. Animation frames can be arranged in a grid, or every row in the sheet can be a separate animation. The frames are animated with curves, or a random frame between two curves can be chosen. The speed of the animation is defined by Cycles.
| Tiles | Define the tiling of the texture. |
| Animation | Specify the animation type: Whole Sheet or Single Row. |
| Whole Sheet | Uses the whole sheet for uv animation |
| - Frame over Time | Controls the uv animation frame of each particle during its lifetime over the whole sheet. Use constant, animate it using a curve or specify a random frame using two curves. See MinMaxCurve. |
| Single Row | Uses a single row of the texture sheet for uv animation |
| - Random Row | If checked the start row will be random and if unchecked the row index can be specified (first row is 0). |
| - Frame over Time | Controls the uv animation frame of each particle during its lifetime within the specified row. Use constant, animate it using a curve or specify a random frame using two curves. See MinMaxCurve. |
| - Cycles | Specify speed of animation. |
Renderer Module

The renderer module exposes the ParticleSystemRenderer component's properties. Note that even though a GameObject has a ParticleSystemRenderer component, its properties are only exposed here; when this module is removed or added, it is actually the ParticleSystemRenderer component that is removed or added.
| Render Mode | Select one of the following particle render modes |
| Billboard | Makes the particles always face the camera |
| Stretched Billboard | Particles are stretched using the following parameters |
| - Camera Scale | How much the camera speed is factored in when determining particle stretching |
| - Speed Scale | Defines the length of the particle compared to its speed |
| - Length Scale | Defines the length of the particle compared to its width |
| Horizontal Billboard | Makes the particles align with the Y axis |
| Vertical Billboard | Makes the particles align with the XZ plane while facing the camera |
| Mesh | Particles are rendered using a mesh instead of a quad |
| - Mesh | The reference to the mesh used for rendering particles |
| Normal Direction | Value from 0 to 1 that determines how much normals point toward the camera (0) and how much sideways toward the centre of the view (1). |
| Material | Material used by billboarded or mesh particles. |
| Sort Mode | The draw order of particles can be sorted by distance, youngest first, or oldest first. |
| Sorting Fudge | Use this to affect the draw order. Particle systems with lower sorting fudge numbers are more likely to be drawn last, and thus appear in front of other transparent objects, including other particles. |
| Cast Shadows | Should particles cast shadows? May or may not be possible depending on the material |
| Receive Shadows | Should particles receive shadows? May or may not be possible depending on the material |
| Max Particle Size | Set max relative viewport size. Valid values: 0-1 |
Particle System Grouping
An important feature of Unity's particle system is that individual particle systems can be grouped by being parented to the same root. We will use the term Particle Effect for such a group. Particle systems belonging to the same Particle Effect are played, stopped, and paused together.
For managing complex particle effects, Unity provides a Particle Editor, which can be accessed from the Inspector, by pressing

Overview of the Particle System Editor
This editor lets you toggle between rendering the entire particle effect and rendering only the selected particle systems. The current selection is highlighted with a blue frame in the Particle Editor and also shown in blue in the Hierarchy view. You can change the selection either from the Hierarchy View or from the Particle Editor, by clicking the icon in the top-left corner of a particle system. To multi-select, use Ctrl+click on Windows and Command+click on the Mac.
You can explicitly control the rendering order of grouped particles (or otherwise spatially close particle emitters) by tweaking the Sorting Fudge property in the Renderer module.

Particle Systems in the same hierarchy are considered as part of the same Particle Effect. This hierarchy shows the setup of the effect shown above.
Page last updated: 2012-08-28
Mecanim Animation System
Unity has a rich and sophisticated animation system called Mecanim. Mecanim provides:
- Easy workflow and setup of animations on humanoid characters.
- Animation retargeting - the ability to apply animations from one character model onto another.
- Simplified workflow for aligning animation clips.
- Convenient preview of animation clips, transitions and interactions between them. This allows animators to work more independently of programmers, prototype and preview their animations before gameplay code is hooked in.
- Management of complex interactions between animations with a visual programming tool.
- Animating different body parts with different logic.

Mecanim workflow
Workflow in Mecanim can be split into three major stages.
1. Asset preparation and import. This is done by artists or animators, with 3rd party tools, such as Max or Maya. This step is independent of Mecanim features.
2. Character setup for Mecanim, which can be done in 2 ways:
a. Humanoid character setup. Mecanim has a special workflow for humanoid models, with extended GUI support and retargeting. The setup involves creating and setting up an Avatar and tweaking Muscle definitions.
b. Generic character setup. This is for anything like creatures, animated props, four-legged animals, etc. Retargeting is not possible here, but you can still take advantage of the rich feature set of Mecanim, including everything described below.
3. Bringing characters to life. This involves setting up animation clips and the interactions between them: setting up State Machines and Blend Trees, exposing Animation Parameters, and controlling animations from code.
Mecanim comes with a lot of new concepts and terminology. If at any point, you need to find out what something means, go to our Animation Glossary.
- A Glossary of Animation and Mecanim terms
- Asset Preparation and Import
- Working with humanoid animations
- Generic Animations in Mecanim
- Bringing characters to life
Legacy animation system
While Mecanim is recommended for use in most situations, especially for working with humanoid animations, the Legacy animation system is still used in a variety of contexts. One of them is working with legacy animations and code (content created before Unity 4.0). Another is controlling animation clips with parameters other than time (for example, for controlling the aiming angle). For information on the Legacy animation system, see this section.
Unity intends to phase out the Legacy animation system over time for all cases by merging the workflows into Mecanim.
Page last updated: 2012-11-02
A glossary of animation and Mecanim terms
| Icon | Term | Description | Type of Concept | Usage/Comments |
|---|---|---|---|---|
| | Animation Clip related terms | | | |
| ![]() | Animation Clip | Animation data that can be used for animated characters or simple animations. It is a simple "unit" piece of motion, such as (one specific instance of) "Idle", "Walk" or "Run" | sub-Asset | |
| ![]() | Body Mask | A specification of which body parts to include or exclude for a skeleton | Asset (.mask) | Used in Animation Layers and in the importer |
| | Animation Curves | Curves can be attached to animation clips and controlled by various parameters from the game | | |
| | Avatar related terms | | | |
| ![]() | Avatar | An interface for retargeting one skeleton to another | sub-Asset | |
| | Retargeting | Applying animations created for one model to another | Process | |
| | Rigging | The process of building a skeleton hierarchy of bone joints for your mesh | Process | Done in an external tool, such as Max or Maya |
| | Skinning | The process of binding bone joints to the character's mesh or "skin" | Process | Done in an external tool, such as Max or Maya |
| | Muscle Definition | A Mecanim concept that gives you more intuitive control over the character's skeleton. When an Avatar is in place, Mecanim works in muscle space, which is more intuitive than bone space | | |
| | T-pose | The pose in which the character has its arms straight out to the sides, forming a "T". This is the required pose for the character to be in, in order to create an Avatar | | |
| | Bind-pose | The pose in which the character was modelled | | |
| ![]() | Human template | A pre-defined bone mapping | Asset (.ht) | Used for matching bones from FBX files to the Avatar |
| | Animator and Animator Controller related terms | | | |
| | Animator Component | Component on a model that animates that model using the Mecanim animation system. The component has a reference to an Animator Controller asset that controls the animation. | Component | |
| | Root Motion | Motion of the character's root, whether it is controlled by the animation itself or externally. | | |
| ![]() | Animator Controller (Asset) | The Animator Controller controls animation through Animation Layers with Animation State Machines and Animation Blend Trees, controlled by Animation Parameters. The same Animator Controller can be referenced by multiple models with Animator components. | Asset (.controller) | |
| | Animator Controller (Window) | The window where the Animator Controller Asset is visualized and edited. | Window | |
| | Animation Layer | An Animation Layer contains an Animation State Machine that controls animations of a model or part of it. An example of this is if you have a full-body layer for walking/jumping and a higher layer for upper-body motions such as throwing an object or shooting. The higher layers take precedence for the body parts they control. | | |
| | Animation State Machine | A graph controlling the interaction of Animation States. Each state references an Animation Blend Tree or a single Animation Clip. | | |
| | Animation Blend Tree | Used for continuous blending between similar Animation Clips based on float Animation Parameters. | | |
| | Animation Parameters | Used to communicate between scripting and the Animator Controller. Some parameters can be set in scripting and used by the controller, while other parameters are based on Custom Curves in Animation Clips and can be sampled using the scripting API. | | |
| | Inverse Kinematics (IK) | The ability to control the character's body parts based on various objects in the world. | | |
| | Non-Mecanim animation terms | | | |
| | Animation Component | The component needed for non-Mecanim animations | Component | |
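As a quick illustration of the Animation Parameters entry above, parameters defined in an Animator Controller can be read and written from script through the Animator API. This is a minimal sketch; the parameter names "Speed" and "Jump" are hypothetical and must match parameters you have actually defined in your controller:

```csharp
using UnityEngine;

public class ParameterExample : MonoBehaviour {

    private Animator animator;

    void Start () {
        animator = GetComponent<Animator>();
    }

    void Update () {
        // Write parameters that the controller's transitions can test.
        // "Speed" and "Jump" are hypothetical names defined in the controller.
        animator.SetFloat("Speed", Input.GetAxis("Vertical"));
        if (Input.GetButtonDown("Jump"))
            animator.SetBool("Jump", true);

        // Parameters driven by Custom Curves in clips can be sampled the same way.
        float speed = animator.GetFloat("Speed");
    }
}
```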
Asset Preparation and Import
Humanoid meshes
In order to take full advantage of Mecanim's humanoid animation system and retargeting, you need to have a rigged and skinned humanoid type mesh.
- A character model is generally made up of polygons in a 3D package, or converted to a polygon or triangulated mesh from a more complex mesh type before export.
- A joint hierarchy or skeleton which defines the bones inside the mesh and their movement in relation to one another, must be created to control the movement of the character. The process for creating the joint hierarchy is known as rigging.
- The mesh or skin must then be connected to the joint hierarchy in order to define which parts of the character mesh move when a given joint is animated. The process of connecting the skeleton to the mesh is known as skinning.
How to obtain humanoid models
There are three main ways to obtain humanoid models for use with the Mecanim Animation system:
- Use a procedural character system or character generator such as Poser, Makehuman or Mixamo. Some of these systems will rig and skin your mesh (eg, Mixamo) while others will not. Furthermore, these methods may require that you reduce the number of polygons in your original mesh to make it suitable for use in Unity.
- Purchase demo examples and character content from the Unity Asset Store.
- Also, you can of course prepare your own character from scratch.
Export & Verify
Unity imports a number of different generic and native 3D file formats. The format we recommend for exporting and verifying your model is FBX 2012 since it will allow you to:
- Export the mesh with the skeleton hierarchy, normals, textures and animation
- Re-import into your 3D package to verify your animated model has exported as you expected
- Export animations without meshes
Further details
The following pages cover the stages of preparing and importing animation assets in greater depth
(back to Mecanim introduction)
Page last updated: 2012-11-01
Preparing your own character
There are three main steps in creating an animated humanoid character from scratch: modelling, rigging and skinning.
Modelling
This is the process of creating your own humanoid mesh in a 3D modelling package - 3DSMax, Maya, Blender, etc. Although this is a whole subject in its own right, there are a few guidelines you can follow to ensure a model works well with animation in a Unity project.
- Observe a sensible topology. The exact nature of a "sensible" structure for your mesh is rather subtle but generally, you should bear in mind how the vertices and triangles of the model will be distorted as it is animated. A poor topology will not allow the model to move without unsightly distortion of the mesh. A lot can be learned by studying existing 3D character meshes to see how the topology is arranged and why.
- Be mindful of the scale of your mesh. Do a test import and compare the size of your imported model with a "meter cube" (the standard Unity cube primitive has a side length of one unit, so it can be taken as a 1m cube for most purposes). Check the units your 3D package is using and adjust the export settings so that the size of the model is in correct proportion to the cube. Unless you are careful, it is easy to create models without any notion of their scale and consequently end up with a set of objects that are disproportionate in size when they are imported into Unity.
- Arrange the mesh so that the character's feet are standing on the local origin or "anchor point" of the model. Since a character typically walks upright on a floor, it is much easier to handle if its anchor point (ie, its transform position) is directly on that floor.
- Model in a T-pose if you can. This will help allow space to refine polygon detail where you need it (e.g. underarms). This will also make it easier to position your rig inside the mesh.
- Clean up your model. Where possible, cap holes, weld verts and remove hidden faces; this will help with skinning, especially automated skinning processes.
Rigging
This is the process of creating a skeleton of joints to control the movements of your model.
3D packages provide a number of ways to create joints for your humanoid rig. These range from ready-made biped skeletons that you can scale to fit your mesh, right through to tools for individual bone creation and parenting to create your own bone structure. Although the details are outside the scope of Unity, here are some general guidelines:
- Study existing humanoid skeletons hierarchies (eg, bipeds) and where possible use or mimic the bone structure.
- Make sure the hips are the parent bone for your skeleton hierarchy.
- A minimum of fifteen bones is required in the skeleton.
- The joint/bone hierarchy should follow a natural structure for the character you are creating. Given that arms and legs come in pairs, you should use a consistent convention for naming them (eg, "arm_L" for the left arm, "arm_R" for the right arm, etc). Possible hierarchies include:
- HIPS - spine - chest - shoulders - arm - forearm - hand
- HIPS - spine - chest - neck - head
- HIPS - UpLeg - Leg - foot - toe - toe_end
Skinning
This is the process of attaching the mesh to the skeleton
Skinning involves binding vertices in your mesh to bones, either directly (rigid bind) or with blended influence to a number of bones (soft bind). Different software packages use different methods, eg, assigning individual vertices and painting the weighting of influence per bone onto the mesh. The initial setup is typically automated, say by finding the nearest influence or using "heatmaps". Skinning usually requires a fair amount of work and testing with animations in order to ensure satisfactory results for the skin deformation. Some general guidelines for this process include:
- Using an automated process initially to set up some of the skinning (see relevant tutorials on 3DMax, Maya, etc.)
- Creating a simple animation for your rig or importing some animation data to act as a test for the skinning. This should give you a quick way to evaluate whether or not the skinning looks good in motion.
- Incrementally editing and refining your skinning solution.
- Sticking to a maximum of four influences when using a soft bind, since this is the maximum number that Unity will handle. If more than four influences affect part of the mesh then at least some information will be lost when playing the animation in Unity.
(back to AssetPreparationandImport)
(back to Mecanim introduction)
Page last updated: 2012-11-01
Importing Animations
Before a character model can be used, it must first be imported into your project. Unity can import native Maya (.mb or .ma) and Cinema 4D (.c4d) files, and also generic FBX files which can be exported from most animation packages (see this page for further details on exporting). To import an animation, simply drag the model file to the Assets folder of your project. When you select the file in the Project View, you can edit the import settings in the Inspector:-

The Import Settings Dialog for a mesh
See the FBX importer page for a full description of the available import options.
(back to Mecanim introduction)
Page last updated: 2012-11-02
Splitting animations
An animated character typically has a number of different movements that are activated in the game in different circumstances. For example, it might need separate animations for walking, running, jumping, throwing, dying, etc. Depending on the way the model was animated, these separate movements might be imported as distinct animation clips or as one single clip where each movement simply follows on from the previous one. In cases where there is only a single clip, the clip must be split into its component animation sequences within Unity, which will involve some extra steps in your workflow.
Working with models that have pre-split animations
The simplest types of models to work with are those that contain pre-split animations. If you have an animation like that, the Animations tab in the Animation Importer Inspector will look like this:

You will see a list of available clips which you can preview by pressing Play in the Preview Window (lower down in the inspector). The frame ranges of the clips can be edited, if needed.
Working with models that have unsplit animations
For models where the clips are supplied as one continuous animation, the Animation tab in the Animation Importer Inspector will look like this:

In cases like this, you can define the frame ranges that correspond to each of the separate animation sequences (walking, jumping, etc). You can create a new animation clip by pressing (+) and selecting the range of frames that are included in it.
For example:
- walk animation during frames 1 - 33
- run animation during frames 41 - 57
- kick animation during frames 81 - 97

The Import Settings Options for Animation
In the Import Settings, the Split Animations table is where you tell Unity which frames in your asset file make up which Animation Clip. The names you specify here are used to activate them in your game.
| Name | Defines the Animation Clip's name within Unity. |
| Start | The first frame of the animation. The frame number refers to the same frame as in the 3D program used to create the animation. |
| End | The last frame of the animation. |
| Wrap Mode | Defines how time beyond the playback range of the clip should be treated (Once, Loop, PingPong, ClampForever). |
| Add Loop Frame | If enabled, an extra loop frame is inserted at the end of the animation. This frame matches the first frame in the clip. Use this if you want to make a looping animation and the first & last frames don't match up exactly. |
Working with animation clips for Mecanim animations.

| Lock Pose | Lock Pose |
| Lock Root Rotation | Lock Root Rotation |
| Lock Height | Lock Height |
| Lock Root Position | Lock root position |
| Rotation Offset | Rotation Offset |
| Cycle Offset | Cycle Offset |
| Mirror | Mirror |
| Body Mask | The parts of the body this animation clip affects |
| Curves | Parametric curves |
Adding animations to models that do not contain them
You can add animation clips to an Animation component even for models without muscle definitions (ie, non-Mecanim). You need to specify the default animation clip in the Animation property, and the available animation clips in the Animations property. The animation clips you add to such a non-Mecanim model should also be set up in a non-Mecanim way (ie, the Muscle Definition property should be set to None).
For models that have muscle definitions (Mecanim), the process is different:-
- Create a New Animator Controller
- Open the Animator Controller Window
- Drag the desired animation clip into the Animator Controller Window
- Drag the model asset into the Hierarchy.
- Add the animator controller to the Animator component of the asset.
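The last step above can also be performed from script if you prefer. A minimal sketch, assuming the controller asset lives in a Resources folder; the asset name "MyController" is hypothetical:

```csharp
using UnityEngine;

public class AssignController : MonoBehaviour {

    void Start () {
        // Load a controller asset from a Resources folder and assign it to the
        // Animator component; "MyController" is a hypothetical asset name.
        var controller = Resources.Load<RuntimeAnimatorController>("MyController");
        GetComponent<Animator>().runtimeAnimatorController = controller;
    }
}
```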
Importing Animations using multiple model files
Another way to import animations is to follow a naming scheme that Unity allows for the animation files. You create separate model files and name them with the convention 'modelName@animationName.fbx'. For example, for a model called "goober", you could import separate idle, walk, jump and walljump animations using files named "goober@idle.fbx", "goober@walk.fbx", "goober@jump.fbx" and "goober@walljump.fbx". Only the animation data from these files will be used, even if the original files are exported with mesh data.

An example of four animation files for an animated character (note that the .fbx suffix is not shown within Unity)
Unity automatically imports all four files and collects all of the animations into the file without the @ sign. In the example above, the goober.mb file will be set up to reference the idle, jump, walk and walljump animations automatically.
For FBX files, simply export a model file with no animation ticked (eg, goober.fbx) and the 4 clips as goober@animname.fbx by exporting the desired keyframes for each (enable animation in the FBX dialog).
(back to Mecanim introduction)
Page last updated: 2012-11-02
Avatar Creation and Setup
The Mecanim Animation System is particularly well suited for working with animations for humanoid skeletons. Since humanoid skeletons are a very common special case and are used extensively in games, Unity provides a specialized workflow, and an extended tool set for humanoid animations.
Because of the similarity in bone structure, it is possible to map animations from one humanoid skeleton to another, allowing retargeting and inverse kinematics. With rare exceptions, humanoid models can be expected to have the same basic structure, representing the major articulate parts of the body, head and limbs. The Mecanim system makes good use of this idea to simplify the rigging and control of animations. A fundamental step in the animation workflow is to set up a mapping between the simplified humanoid bone structure understood by Mecanim and the actual bones present in the skeleton; in Mecanim terminology, this mapping is called an Avatar. The pages in this section explain how to create an Avatar for your model.
- Creating the Avatar
- Configuring the Avatar
- Muscle setup
- Avatar Body Mask
- Retargeting of Humanoid animations
- Inverse Kinematics (Pro only)
Creating the Avatar
After importing the FBX file, you can specify the rig type in the Rig tab of the FBX importer options.
Humanoid animations
For a Humanoid rig, make the selection and apply it; Mecanim will then attempt to match the existing bone structure to the Avatar bone structure. In many cases, it can do this automatically by analysing the connections between the bones in the rig.
If the match has succeeded, you will see a check mark next to the menu.
A successful match also means that an Avatar sub-asset has been added to the FBX asset, which you will be able to see in the Project View hierarchy.
Models with and without an Avatar sub-asset
The Avatar asset as it appears in the Inspector
If Mecanim was unable to create the Avatar, you will see a cross next to the button, and no Avatar sub-asset will be added. When this happens, you need to configure the Avatar manually.
Non-humanoid animations
Two options (Generic and Legacy) are available for non-humanoid animations. Generic animations are imported using Mecanim, but do not have access to some of the extra features available for humanoid animations. Legacy animations use the animation system that Unity provided before Mecanim. Legacy animations are still useful in certain cases (especially for older projects that you don't want to update fully), but they are rarely needed for new projects. See this section of the manual for further details on Legacy animations.
(back to Avatar Creation and Setup)
(back to Mecanim introduction)
Page last updated: 2012-11-26
Configuring the Avatar
Since the Avatar is such an important aspect of the Mecanim system, it is important that it is configured properly for your model. So, whether the automatic Avatar creation succeeds or fails, you need to go into Configure mode to ensure that your Avatar is valid and properly set up. It is important that your character's bone structure matches Mecanim's predefined bone structure and that the model is in T-pose.
If the automatic Avatar creation fails, you will see a cross next to the Configure button.
If it succeeds, you will see a check/tick mark:
Here, success simply means all of the required bones have been matched but for better results, you might want to match the optional bones as well and get the model into a proper T-pose.
When you enter Configure mode, the editor will ask you to save your scene. The reason for this is that in Configure mode, the Scene View is used to display bone, muscle and animation information for the selected model alone, without displaying the rest of the scene.
Once you have saved the scene, you will see a new Avatar Configuration inspector, with a bone mapping.
The inspector shows which of the bones are required and which are optional - the optional ones can have their movements interpolated automatically. For Mecanim to produce a valid match, your skeleton needs to have at least the required bones in place. In order to improve your chances for finding a match to the Avatar, name your bones in a way that reflects the body parts they represent (names like "LeftArm", "RightForearm" are suitable here).
If the model does NOT yield a valid match, you can manually follow a similar process to the one used internally by Mecanim:-
- (try to get the model closer to the pose with which it was modelled, a sensible initial pose)
- (create a bone-mapping from an initial pose)
- (force the model closer to T-pose, which is the default pose used by Mecanim animations)
If the auto-mapping fails completely or partially, you can assign bones by dragging them either from the Scene or from the Hierarchy. If Mecanim thinks a bone fits, it will show up as green in the Avatar Inspector; otherwise, it shows up in red.
Finally, if the bone assignment is correct but the character is not in the correct pose, you will see the message "Character not in T-Pose". You can try to fix that by rotating the remaining bones into T-pose.
Human Template files
You can save the mapping of your skeleton's bones to the Avatar on disk as a "human template file" (extension *.ht). This file can be reused by any characters that use the same mapping. It is useful, for example, when your animators use a consistent layout and naming convention for skeletons, but Mecanim doesn't know how to interpret it.
You can then load the .ht file for each model, so that manual remapping only needs to be done once.
(back to Avatar Creation and Setup)
(back to Mecanim introduction)
Page last updated: 2012-11-06
Muscle Definitions
Mecanim allows you to control the range of motion of different bones using Muscles.
Once the Avatar has been properly configured, Mecanim will "understand" the bone structure and allow you to start working in the Muscles tab of the Avatar Inspector. Here, it is very easy to tweak the character's range of motion and ensure the character deforms in a convincing way, free from visual artifacts or self-overlaps.

You can either adjust individual bones in the body (lower part of the view) or manipulate the character using predefined deformations which operate on several bones at once (upper part of the view).
Muscle Clips
In the Animation tab, you can set up Muscle Clips, which are animations for specific muscles and muscle groups.

You can also define which body parts these muscle clips apply to.

(back to Avatar Creation and Setup)
(back to Mecanim introduction)
Page last updated: 2012-11-02
Avatar Body Mask
Specific body parts in an animation can be selectively enabled or disabled using a so-called Body Mask. Body Masks are used in the Animation tab of the mesh import inspector and in Animation Layers. Body Masks let you tailor an animation more precisely to the specific requirements of your character. For example, you may have a standard walking animation that includes both arm and leg motion, but if a character is carrying a large object with both hands, it would look unnatural for the arms to swing widely while walking. However, you can still use the standard walking animation by switching off the arm motion with a Body Mask.
The body parts included are: head, left arm, right arm, left hand, right hand, left leg, right leg and root (the shadow under the feet). In the Body Mask, you can also toggle Inverse Kinematics (IK) for the hands and feet, which determines whether or not IK curves are included in the animation.
A Body Mask in the Inspector (arms excluded)
In the Animation tab of the mesh import inspector, you will see a list called Clips, containing all of the object's animation clips. When you select an item from this list, options for the animation clip will be shown, including the Body Mask editor.
You can also create Body Mask assets from the menu; these are stored as files with the .mask extension.
Body Masks can be reused in the Animator Controller when specifying Animation Layers.
A benefit of using Body Masks is that they tend to reduce memory overhead, since body parts that are not active do not need their associated animation curves. Also, unused animation curves need not be calculated during playback, which tends to reduce the CPU overhead of the animation.
(back to Mecanim introduction)
Page last updated: 2012-10-18
Retargeting
One of the most powerful features of Mecanim is retargeting of humanoid animations. This means that, with relative ease, you can apply the same set of animations to various character models. Retargeting is only possible for humanoid models, where an Avatar has been configured, because this gives us a correspondence between the models' bone structures.
Recommended Hierarchy structure
When working with Mecanim animations, you can expect your scene to contain the following elements:-
- The Imported character model, which has an Avatar on it.
- The Animator Component, referencing an Animator Controller asset.
- A set of animation clips, referenced from the Animator Controller.
- Scripts for the character.
- Character-related components, such as the Character Controller.
Your project should also contain another character model with a valid Avatar.
If in doubt about the terminology, please consult the Animation Glossary
The recommended setup is to:
- Create a GameObject in the Hierarchy that contains Character-related components
- Put the model as a child of the GameObject, together with the Animator component
- Make sure scripts referencing the Animator are looking for the animator in the children instead of the root (use GetComponentInChildren<Animator>() instead of GetComponent<Animator>())
Then in order to reuse the same animations on another model, you need to:
- Disable the original model
- Drop in the desired model as another child of GameObject
- Make sure the Animator Controller property for the new model is referencing the same controller asset
- Tweak the Character Controller, the transform, and other properties on the top-level GameObject, to make sure that the animations work smoothly with the new model.
- You're done!
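The scripting advice in the setup steps above can be sketched as follows. Since the model (and therefore the Animator) is a child of the top-level GameObject, a character script should search downwards in the hierarchy:

```csharp
using UnityEngine;

public class CharacterScript : MonoBehaviour {

    private Animator animator;

    void Start () {
        // The Animator sits on the child model, not on this top-level
        // GameObject, so search the children rather than the root. This way,
        // the reference stays valid when the child model is swapped out.
        animator = GetComponentInChildren<Animator>();
    }
}
```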
(back to Mecanim introduction)
Page last updated: 2012-11-07
Inverse Kinematics
Most animation is produced by rotating the angles of joints in a skeleton to predetermined values. The position of a child joint changes according to the rotation of its parent and so the end point of a chain of joints can be determined from the angles and relative positions of the individual joints it contains. This method of posing a skeleton is known as forward kinematics.
However, it is often useful to look at the task of posing joints from the opposite point of view - given a chosen position in space, work backwards and find a valid way of orienting the joints so that the end point lands at that position. This can be useful when you want a character to touch an object at a point selected by the user or plant its feet convincingly on an uneven surface. This approach is known as Inverse Kinematics (IK) and is supported in Mecanim for any humanoid character with a correctly configured Avatar.

To set up IK for a character, you typically have objects around the scene that the character interacts with, and then set up the IK through script, in particular using Animator functions like SetIKPositionWeight, SetIKRotationWeight, SetIKPosition, SetIKRotation, SetLookAtPosition, bodyPosition and bodyRotation.
In the illustration above, we show a character grabbing a cylindrical object. How do we make this happen?
We start out with a character that has a valid Avatar, and attach to it a script that actually takes care of the IK, let's call it IKCtrl:
```csharp
using UnityEngine;
using System;
using System.Collections;

[RequireComponent(typeof(Animator))]
public class IKCtrl : MonoBehaviour {

    protected Animator animator;

    public bool ikActive = false;
    public Transform rightHandObj = null;

    void Start ()
    {
        animator = GetComponent<Animator>();
    }

    // a callback for calculating IK
    void OnAnimatorIK()
    {
        if (animator) {

            // if the IK is active, set the position and rotation directly to the goal
            if (ikActive) {

                // weight = 1.0 for the right hand means position and rotation
                // will be at the IK goal (the place the character wants to grab)
                animator.SetIKPositionWeight(AvatarIKGoal.RightHand, 1.0f);
                animator.SetIKRotationWeight(AvatarIKGoal.RightHand, 1.0f);

                // set the position and the rotation of the right hand
                // where the external object is
                if (rightHandObj != null) {
                    animator.SetIKPosition(AvatarIKGoal.RightHand, rightHandObj.position);
                    animator.SetIKRotation(AvatarIKGoal.RightHand, rightHandObj.rotation);
                }
            }
            // if the IK is not active, set the position and rotation
            // of the hand back to the original position
            else {
                animator.SetIKPositionWeight(AvatarIKGoal.RightHand, 0);
                animator.SetIKRotationWeight(AvatarIKGoal.RightHand, 0);
            }
        }
    }
}
```
As we do not intend for the character to grab the entire object with its hand, we position a sphere where the hand should be on the cylinder, and rotate it accordingly.
This sphere should then be placed as the "Right Hand Obj" property of the IKCtrl script

Observe the character grabbing and releasing the object as you toggle the ikActive checkbox.
(back to Mecanim introduction)
Page last updated: 2012-11-07
Generic Animations
The full power of Mecanim is most evident when you are working with humanoid animations. However, non-humanoid animations are also supported although without the avatar system and other features. In Mecanim terminology, non-humanoid animations are referred to as Generic Animations.
To start working with a generic skeleton, go to the Rig tab in the FBX importer and choose Generic from the Animation Type menu.

Root node in generic animations
While in the case of humanoid animations we have knowledge about the center of mass and orientation, in the case of Generic animations the skeleton can be arbitrary, so we need to specify a reference bone, or "root node". Selecting the root node allows us to establish correspondence between animation clips for a generic model, and to blend properly between animations that are not "in place". The root node is also essential for separating the animation of bones relative to each other from the motion of the root in the world (controlled from OnAnimatorMove).
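As mentioned, root motion can be handled from the OnAnimatorMove callback. A minimal sketch, assuming you want to apply the animator's computed root motion deltas to the transform yourself:

```csharp
using UnityEngine;

public class RootMotionExample : MonoBehaviour {

    private Animator animator;

    void Start () {
        animator = GetComponent<Animator>();
    }

    // Called every frame when the script handles the Animator's root motion.
    void OnAnimatorMove () {
        // Apply the position and rotation deltas computed by the animation.
        // A real script might filter or constrain these, e.g. to keep the
        // character on the ground.
        transform.position += animator.deltaPosition;
        transform.rotation *= animator.deltaRotation;
    }
}
```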
Bringing characters to life
- Looping animation clips
- Animator Component and Animator Controller
- Animation State Machines
- Blend Trees
- Mecanim Advanced topics:
Looping Animation Clips
A common operation for people working with animations is to make sure they loop properly. It is important, for example, that the animation clip representing the walk cycle begins and ends in a similar pose (e.g. left foot on the ground), to ensure there is no foot sliding or strange jerky motion. Mecanim provides convenient tools for this. Animation clips can loop based on pose, rotation and position.
If you drag the Start or End points of the animation clip, you will see the looping fitness curves for all of the parameters on which looping can be based. If you place the Start/End marker in a place where the curve for the property is green, it is more likely that the clip can loop properly. The loop match indicator shows how good the looping is for the selected range.

Clip ranges with bad match for Loop Pose

Clip ranges with good match for Loop Pose
Once the loop match indicator is green, enabling Loop Pose (for example) will make sure the looping of the pose is artifact-free.
For more details on animation clip options, see the Animation Clip reference.
(back to Mecanim introduction)
Animator Component and Window
The Animator Component
Any GameObject that has an avatar will also have an Animator component, which is the link between the character and its behaviour.

The Animator component references an Animator Controller, which is used for setting up behaviour on the character. This includes setup of State Machines, Blend Trees, and events to be controlled from script.
Properties
| Controller | The animator controller attached to this character |
| Avatar | The Avatar for this character |
| Apply Root Motion | Should the character's position be controlled from the animation itself or from script? |
| Animate Physics | Should the animation interact with physics? |
| Culling Mode | Culling mode for animations |
| Always animate | Always animate; don't do culling |
| Based on Renderers | When the renderers are invisible, only root motion is animated; all other body parts remain static while the character is invisible |
Animator Controller
You can view and set up character behaviour from the Animator Controller view (opened from the menu).
An Animator Controller can be created from the Project View (via the menu).
This creates a .controller asset on disk, which looks like this in the Project Browser:

The Animator Controller asset on disk
After the state machine has been set up, you can drag and drop the controller onto the Animator component of any character with an Avatar in the Hierarchy View.

The Animator Controller Window contains:
- The Animation Layer widget (top-left corner, see Animation Layers)
- The Event Parameters widget (bottom-left, see Animation Parameters)
- The visualization of the State Machine itself.
Note that the Animator Controller Window always displays the state machine from the most recently selected .controller asset, regardless of which scene is currently loaded.
(back to Mecanim introduction)
Page last updated: 2012-10-18
Animation State Machines
It is common for a character to have several different animations that correspond to different actions it can perform in the game. For example, it may breathe or sway slightly while idle, walk when commanded to and raise its arms in panic as it falls from a platform. Controlling when these animations are played back is potentially quite a complicated scripting task. Mecanim borrows a computer science concept known as a state machine to simplify the control and sequencing of a character's animations.
State machine basics
The basic idea is that a character is engaged in some particular kind of action at any given time. The actions available will depend on the type of gameplay but typical actions include things like idling, walking, running, jumping, etc. These actions are referred to as states, in the sense that the character is in a "state" where it is walking, idling or whatever. In general, the character will have restrictions on the next state it can go to rather than being able to switch immediately from any state to any other. For example, a running jump can only be taken when the character is already running and not when it is at a standstill, so it should never switch straight from the idle state to the running jump state. The options for the next state that a character can enter from its current state are referred to as state transitions. Taken together, the set of states, the set of transitions and the variable to remember the current state form a state machine.
The states and transitions of a state machine can be represented using a graph diagram, where the nodes represent the states and the arcs (arrows between nodes) represent the transitions. You can think of the current state as being a marker or highlight that is placed on one of the nodes and can then only jump to another node along one of the arrows.

The importance of state machines for animation is that they can be designed and updated quite easily with relatively little coding. Each state has an animation sequence associated with it that will play whenever the machine is in that state. This enables an animator or designer to define the possible sequences of character actions and animations without being concerned about how the code will work.
Mecanim state machines
Mecanim's Animation State Machines provide a way to overview all of the animation clips related to a particular character and allow various events in the game (for example user input) to trigger different animations.
Animation State Machines can be set up from the Animator Controller Window, and they look something like this:

State Machines consist of States, Transitions and Events and smaller Sub-State Machines can be used as components in larger machines.
(back to Mecanim introduction)
Page last updated: 2012-11-02
Animation States
Animation State
Animation States are the basic building blocks of an Animation State Machine. Each state contains an individual animation sequence (or blend tree) that plays while the character is in that state. When an event in the game triggers a state transition, the character moves to a new state and the animation sequence for that state takes over.
When you select a state in the Animator Controller, the properties for that state are shown in the inspector:

| Speed | The default speed of the animation |
| Motion | The animation clip assigned to this state |
| Foot IK | Whether foot IK is enabled for this state |
| Transitions | The list of transitions originating from this state |
The default state, shown in brown, is the state that the machine will be in when it is first activated. You can change the default state by right-clicking on another state and selecting Set As Default from the context menu. The solo and mute checkboxes on each transition are used to control the behaviour of animation previews; see this page for further details.
You can add a new state by right-clicking anywhere in the Animator Controller Window and selecting Create State->Empty from the context menu. Alternatively, you can drag an animation into the Animator Controller Window to create a state containing that animation. (Note that only Mecanim animations can be dragged onto the Controller; non-Mecanim animations will be rejected.) States can also contain Blend Trees.
Any State
Any State is a special state which is always present. It exists for situations where you want to go to a specific state regardless of which state you are currently in. This is a shorthand way of adding the same outward transition to all states in your machine. Because of its special meaning, Any State cannot be the end point of a transition (i.e., jumping to Any State cannot be used as a way of picking a random state to enter next).

(back to Animation State Machines)
Page last updated: 2012-10-18
Animation Transitions
Animation Transitions
Animation Transitions define what happens when the state machine switches from one Animation State to another. Only one transition can be active at any given time.
| Atomic | Is this transition atomic? (that is, it cannot be interrupted) |
| Conditions | Determines when the transition is triggered |
A condition consists of:
- A conditional predicate (If, If Not, Less, Greater, Equals, Not Equal, or Exit Time)
- An event parameter (If and If Not work with Bool parameters; Exit Time uses the time value)
- A parameter value (if needed)
You can adjust the overlap between the two animation clips in a transition by dragging the start and end values.
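From a script, a transition whose condition tests a parameter is triggered simply by setting that parameter. The sketch below is a minimal illustration, assuming a Bool parameter named "Jump" has been added in the Parameters widget and is used in a transition condition (the parameter and button names are illustrative, not part of any default setup):

```csharp
using UnityEngine;

// Minimal sketch: satisfies a transition condition of the form "If Jump".
// "Jump" is a hypothetical parameter name chosen for illustration.
[RequireComponent(typeof(Animator))]
public class JumpTrigger : MonoBehaviour {
    void Update() {
        Animator animator = GetComponent<Animator>();
        // Setting the parameter to true satisfies the "If" predicate,
        // so the transition can fire on the next state machine evaluation.
        animator.SetBool("Jump", Input.GetButtonDown("Jump"));
    }
}
```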

(back to Animation State Machines)
Page last updated: 2012-10-18
Animation Parameters
Animation Parameters expose the operation of the state machine to game logic. Events are triggered based on event parameters, which are set from game logic. Typically you would work with parameters in three places:
- Setting up parameters in the Parameter Widget in the bottom-left corner of the Animator Controller Window
- Setting up conditions for transitions in the Transition Inspector, based on those parameters
- Controlling the parameters from script.

Event parameters can be of 4 basic types: Vector, Float, Int, and Bool, and they can be controlled from script via the functions SetVector, SetFloat, SetInt, and SetBool respectively.
Note that the values next to the parameters serve as default values for those parameters at startup, unless they are overridden by (or blended with) values from animation curves.
Thus, a complete animated character in the scene will have both an Animator Component and a script that controls the parameters in the Animator.

Here's an example of a script that modifies event parameters based on user input:

using UnityEngine;

public class AvatarCtrl : MonoBehaviour {
    protected Animator animator;
    public float DirectionDampTime = 0.25f;

    void Start ()
    {
        animator = GetComponent<Animator>();
    }

    void Update ()
    {
        if (animator)
        {
            // Get the current state
            AnimatorStateInfo stateInfo = animator.GetCurrentAnimatorStateInfo(0);

            // If we're in "Run" mode, respond to input for jump, and set the Jump parameter accordingly
            if (stateInfo.IsName("Base Layer.RunBT"))
            {
                if (Input.GetButton("Fire1"))
                    animator.SetBool("Jump", true);
            }
            else
            {
                animator.SetBool("Jump", false);
            }

            float h = Input.GetAxis("Horizontal");
            float v = Input.GetAxis("Vertical");

            // Set event parameters based on user input
            animator.SetFloat("Speed", h*h + v*v);
            animator.SetFloat("Direction", h, DirectionDampTime, Time.deltaTime);
        }
    }
}
(back to Animation State Machines)
Page last updated: 2012-10-26
Animation Blend Trees
Blend Trees are used for continuous blending between similar animations based on float event parameters. A typical example of this is blending between walk and run animations based on the speed parameter. Mecanim can ensure that the transition between the walk and the run is smooth (it is important that the animation clips are aligned: e.g., both should start with the left foot on the floor at 0.0 and have the right foot on the floor at 0.5). Another typical example is blending between RunLeft, Run and RunRight animations based on the direction parameter value between 0.0 (left) and 1.0 (right).
To start working with a new blend tree, you need to:
- Right-click on empty space on the Animator Controller Window
- Select Create State > From New Blend Tree from the context menu.
- Double-click on the Blend Tree to bring up the Blend Tree Inspector.

In the inspector, the first thing you need to do is select the Animation Parameter that will control this Blend Tree.
Then you can add individual animation clips to the blend tree. When you're done, it should look something like this:

The red vertical bar indicates the current value of the event parameter. You can preview what happens to the animations by pressing Play in the Animation Preview Window and dragging the bar left and right.
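At runtime, the float parameter selected for the blend tree is normally driven from gameplay code. A hedged sketch, assuming the blend tree is controlled by a parameter called "Speed" (the parameter name and input mapping are illustrative):

```csharp
using UnityEngine;

// Minimal sketch: drives a blend tree's controlling parameter.
// "Speed" is an assumed parameter name for illustration.
[RequireComponent(typeof(Animator))]
public class BlendTreeDriver : MonoBehaviour {
    void Update() {
        Animator animator = GetComponent<Animator>();
        // Map vertical input onto 0..1 so the blend tree can
        // interpolate between, say, its walk and run clips.
        float speed = Mathf.Clamp01(Input.GetAxis("Vertical"));
        animator.SetFloat("Speed", speed);
    }
}
```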
(back to Mecanim introduction)
Page last updated: 2012-11-06
Advanced topics
- Working with Animation Curves in Mecanim (Pro only)
- Nested State Machines
- Animation Layers
- Animation State Machine Preview (solo and mute)
- Target Matching
- Root Motion - how it works
(back to Mecanim introduction)
Page last updated: 2012-10-08
Animation Curves in Mecanim
Animation curves can be attached to animation clips in the Animations tab of the Animation Import Settings.

The curves on animation clips in Mecanim
The curve's X-axis represents normalized time and always ranges between 0.0 and 1.0 (corresponding to the beginning and the end of the animation clip respectively, regardless of its duration).
Double-clicking an animation curve will bring up the standard Unity curve editor (see this page for further details) which you can use to add keys to the curve. Keys are points along the curve's timeline where it has a value explicitly set by the animator rather than just using an interpolated value. Keys are very useful for marking important points along the timeline of the animation. For example, with a walking animation, you might use keys to mark the points where the left foot is on the ground, then both feet on the ground, right foot on the ground, etc. Once the keys are set up, you can move conveniently between key frames by pressing the Previous Key Frame and Next Key Frame buttons. This will move the vertical red line and show the normalized time at the keyframe; the value you enter in the text box will then set the value of the curve at that time.
Animation Curves and Animator Controller parameters
If you have a curve with the same name as one of the parameters in the Animator Controller, then that parameter will take its value from the value of the curve at each point in the timeline. For example, if you make a call to GetFloat from a script, the returned value will be equal to the value of the curve at the time the call is made. Note that at any given point in time, there might be multiple animation clips attempting to set the same parameter from the same controller. In that case, the curve values from the multiple animation clips are blended. If an animation has no curve for a particular parameter then the blending will be done with the default value for that parameter.
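Reading such a curve-driven parameter from a script is an ordinary GetFloat call. A minimal sketch, assuming a curve and matching parameter named "LeftFootDown" exist on the clip and controller (the name is hypothetical):

```csharp
using UnityEngine;

// Minimal sketch: reads a parameter whose value comes from an
// animation curve. "LeftFootDown" is a hypothetical curve name.
[RequireComponent(typeof(Animator))]
public class CurveReader : MonoBehaviour {
    void Update() {
        Animator animator = GetComponent<Animator>();
        // Returns the (possibly blended) curve value at the
        // current playback time.
        float footDown = animator.GetFloat("LeftFootDown");
        if (footDown > 0.5f) {
            // e.g. trigger a footstep sound here
        }
    }
}
```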
(back to Mecanim introduction)
Page last updated: 2012-11-07
Nested State Machines
For convenience, it is possible to nest Animation State Machines within other Animation State Machines. You can create a sub-state machine by right-clicking on an empty space within the Animator Controller window and selecting Create Sub-State Machine from the context menu.

This forms a sub-state machine, to which you can navigate by double-clicking on the rhombic node:

Note, however, that states can only be connected to other states. Thus, when you create a transition from a state to a sub-state machine, Unity will ask you to select a target state within that machine. Connections can be made both up and down the hierarchy.

The State Inspector and the Transition Inspector will indicate which state machine each state comes from:

(back to State Machines introduction)
(back to Mecanim introduction)
Page last updated: 2012-10-08
Animation Layers
Unity uses Animation Layers for managing complex state machines for different body parts. For example, you might have a lower-body layer for walking and jumping and an upper-body layer for throwing objects or shooting.
You can manage animation layers from the Layers Widget in the top-left corner of the Animator Controller.

You can add a new layer by pressing the + button on the widget. On each layer, you can specify the body mask (the part of the body on which the animation will be applied) and the Blending type. Override means information from other layers will be ignored, while Additive means that the animation will be added on top of previous layers.
The Mask property is there to specify the body mask used on this layer. For example if you want to use upper body throwing animations, while having your character walk or run, you would use an upper body mask, like this:

For more on Avatar Body Masks, you can read this section
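Besides the mask and blending type, each layer has a weight that can be adjusted at runtime via Animator.SetLayerWeight. A hedged sketch; the layer index 1 assumes the upper-body layer was added second (layer 0 is always the base layer), and the button name is illustrative:

```csharp
using UnityEngine;

// Minimal sketch: fades an upper-body layer in and out from script.
// Layer index 1 is an assumption about the controller's layer order.
[RequireComponent(typeof(Animator))]
public class UpperBodyLayer : MonoBehaviour {
    // How strongly the upper-body layer overrides the base layer (0..1).
    public float throwWeight = 1f;

    void Update() {
        Animator animator = GetComponent<Animator>();
        // Weight 0 disables the layer; 1 applies it fully
        // over the masked body parts.
        animator.SetLayerWeight(1, Input.GetButton("Fire1") ? throwWeight : 0f);
    }
}
```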
Animation Layer syncing (Pro only)
Sometimes it is useful to be able to re-use the same state machine in different layers. For example, if you want to simulate "wounded" behavior, you may want "wounded" animations for walk / run / jump instead of the "healthy" ones. You can click the Sync checkbox on one of your layers, and then select the layer you want to sync with. The state machine structure will then be the same, but the actual animation clips used by the states will be distinct.

(back to Mecanim introduction)
Page last updated: 2012-11-07
Animation State Machine Preview (solo and mute)
Solo and Mute functionality
In complex state machines, it is useful to preview the operation of some parts of the machine separately. For this, you can use the Mute / Solo functionality. Muting means a transition will be disabled. Soloed transitions are enabled with respect to other transitions originating from the same state. You can set up mute and solo states either from the Transition Inspector, or the State Inspector (recommended), where you'll have an overview of all the transitions from that state.
Soloed transitions will be shown in green, while muted transitions in red, like this:
In the example above, if you are in State 0, only transitions to State A and State B will be available.
- The basic rule of thumb is that if one Solo is ticked, the rest of the transitions from that state will be muted.
- If both Solo and Mute are ticked, then Mute takes precedence.
Known issues:
- The controller graph currently doesn't always reflect the internal mute states of the engine.
(back to State Machines introduction)
(back to Mecanim introduction)
Page last updated: 2012-10-08
Target Matching
Often in games, a situation arises where a character must move in such a way that a hand or foot lands at a certain place at a certain time. For example, the character may need to jump across stepping stones or jump and grab an overhead beam.
You can use the Animator.MatchTarget function to handle this kind of situation. Say, for example, you want to arrange a situation where the character jumps onto a platform and you already have an animation clip for it called Jump Up. To do this, follow the steps below.
- Find the place in the animation clip at which the character is beginning to get off the ground; note that in this case it is 11.0%, or 0.11 into the animation clip in normalized time.

- Find the place in the animation clip at which the character is about to land on his feet; note that in this case the value is 22.3%, or 0.223.

- Create a script (TargetCtrl.cs) that makes a call to MatchTarget, like this:
using UnityEngine;
using System;

[RequireComponent(typeof(Animator))]
public class TargetCtrl : MonoBehaviour {
    protected Animator animator;

    // The platform object in the scene
    public Transform jumpTarget = null;

    void Start () {
        animator = GetComponent<Animator>();
    }

    void Update () {
        if (animator) {
            if (Input.GetButton("Fire1"))
                animator.MatchTarget(jumpTarget.position, jumpTarget.rotation, AvatarTarget.LeftFoot,
                                     new MatchTargetWeightMask(Vector3.one, 1f), 0.11f, 0.223f);
        }
    }
}
Attach that script onto the Mecanim model.

The script will move the character so that it jumps from its current position and lands with its left foot at the target. Bear in mind that the result of using MatchTarget will generally only make sense if it is called at the right point in gameplay.
(back to Mecanim introduction)
Page last updated: 2012-11-07
Root Motion
Body Transform
The first step is to compute a body transform that will be the same for all humanoid characters (from a retargeting standpoint). The body mass center is used as the body position, and the body orientation is an average of the lower and upper body orientations. Body orientation is at identity for the Avatar T-Pose.
The body position and orientation are stored in the Animation Clip (using the Muscle definitions set up in the Avatar). They are the only world-space curves stored in the Animation Clip. Everything else: muscle curves and IK goals (Hands and Feet) are stored relative to the body transform.
For example, the Hips (or Pelvis, etc.) are usually used to store the world-space position and orientation of the animation. In a straight walk or run animation, the Hips will swing and twist left/right and up/down, but the center of mass will follow a nearly straight line. The average of the lower and upper body orientations will also be more stable, almost constant. The position of the hips will differ from one skeleton to another, depending on how they were modeled (sometimes in the middle of the left and right hips, sometimes offset back or up), and their orientation will be totally arbitrary, so the Hips are not a good choice to use as a world-space transform for retargeting. Look at how the body transform behaves for a barrel jump, where the Hips totally fail!
Root Transform
The Root Transform is a projection on the Y plane of the Body Transform and is computed at runtime. At every Animator update, a delta Root Transform is computed for the current delta time. The delta transform is then applied to the Game Object to make it move.
The Animation Clip Editor settings (Root Transform Rotation, Root Transform Position (Y) and Root Transform Position (XZ)) let you control the Root Transform projection from the Body Transform. Depending on these settings, some parts of the Body Transform may be transferred to the Root Transform. For example, you can decide whether you want the motion's Y position to be part of the Root Motion (trajectory) or part of the pose (body transform), which is known as Baked into Pose.

Root Transform Rotation
Bake into Pose: The orientation will stay on the body transform (or Pose). The Root Orientation will be constant and the delta Orientation will be identity. This means that the Game Object will not be rotated at all by that AnimationClip.
Only AnimationClips that have similar start and stop Root Orientation should use this option. You will have a Green Light in the UI telling you that an AnimationClip is a good candidate. A suitable candidate would be a straight walk or a run.
Based Upon: This lets you set the orientation of the clip. Using Body Orientation, the clip will be oriented to follow the forward vector of the body. This default setting works well for most Motion Capture (Mocap) data like walks, runs, and jumps, but it will fail with motion like strafing where the motion is perpendicular to the body's forward vector. In those cases you can manually adjust the orientation using the Offset setting. Finally there is Original, which will automatically add the authored offset found in the imported clip. It is usually used with keyframed data to respect the orientation that was set by the artist.
Offset: used to enter the offset when the Offset option is chosen for Based Upon.
Root Transform Position (Y)
This uses the same concept described in Root Transform Rotation.
Bake Into Pose: The Y component of the motion will stay on the Body Transform (Pose). The Y component of the Root Transform will be constant and Delta Root Position Y will be 0. This means that this clip won't change the Game Object's height. Again, you have a Green Light telling you that a clip is a good candidate for baking Y motion into pose.
Most of the AnimationClips will enable this setting. Only clips that will change the GameObject height should have this turned off, like jump up or down.
Animator.gravityWeight is driven by the Bake Into Pose setting for position Y. When enabled, gravityWeight = 1; when disabled, gravityWeight = 0. gravityWeight is blended for clips when transitioning between states.
Based Upon: In a similar way to Root Transform Rotation, you can choose from Original or Center of Mass. There is also a Feet option that is very convenient for AnimationClips that change height (with Bake Into Pose disabled). When using Feet, the Root Transform Position Y will match the lowest foot Y for all frames. Thus the blending point always remains around the feet, which prevents floating problems when blending or transitioning.
Offset: In a similar way to Root Transform Rotation, you can manually adjust the AnimationClip height using the Offset setting.
Root Transform Position (XZ)
Again, this uses the same concept described in Root Transform Rotation and Root Transform Position (Y).
Bake Into Pose will usually be used for Idles where you want to force the delta Position (XZ) to be 0. It stops the accumulation of small deltas drifting after many evaluations. It can also be used for a keyframed clip with Based Upon Original to force an authored position that was set by the artist.
Loop Pose
Loop Pose (like Pose Blending in Blend Trees or Transitions) happens in the referential of Root Transform. Once the Root Transform is computed, the Pose becomes relative to it. The relative Pose difference between Start and Stop frame is computed and distributed over the range of the clip from 0-100%.
Generic Root Motion and Loop Pose
This works in essentially the same way as Humanoid Root Motion, except that instead of using the Body Transform to compute/project a Root Transform, the transform set in Root Node is used. The Pose (all the bones whose transforms are below the Root Motion bone) is made relative to the Root Transform.
Page last updated: 2012-11-07
Scripting Root Motion
Sometimes your animation comes as "in-place", which means if you put it in a scene, it will not move the character that it's on. In other words, the animation does not contain "root motion". For this, we can modify root motion from script. To put everything together follow the steps below (note there are many variations of achieving the same result, this is just one recipe).
- Open the inspector for the FBX file that contains the in-place animation, and go to the Animation tab
- Make sure the Muscle Definition is set to the Avatar you intend to control (let's say this avatar is called Dude, and he has already been added to the Hierarchy View).
- Select the animation clip from the available clips
- Make sure Loop Pose is properly aligned (the light next to it is green), and that the checkbox for Loop Pose is clicked

- Preview the animation in the animation viewer to make sure the beginning and the end of the animation align smoothly, and that the character is moving "in-place"
- On the animation clip, create a curve that will control the speed of the character (you can add a curve from the Animation Import inspector)
- Name that curve something meaningful, like "Runspeed"

- Create a new Animator Controller, (let's call it RootMotionController)
- Drop the desired animation clip into it, this should create a state with the name of the animation (say Run)
- Add a parameter to the Controller with the same name as the curve (in this case, "Runspeed")

- Select the character Dude in the Hierarchy, whose inspector should already have an Animator component.
- Drag RootMotionController onto the Controller property of the Animator
- If you press play now, you should see the "Dude" running in place
- Finally, to control the motion, we will need to create a script (RootMotionScript.cs) that implements the OnAnimatorMove callback.
using UnityEngine;
using System.Collections;

[RequireComponent(typeof(Animator))]
public class RootMotionScript : MonoBehaviour {
    void OnAnimatorMove()
    {
        Animator animator = GetComponent<Animator>();
        if (animator)
        {
            Vector3 newPosition = transform.position;
            newPosition.z += animator.GetFloat("Runspeed") * Time.deltaTime;
            transform.position = newPosition;
        }
    }
}
- Attach RootMotionScript.cs to "Dude"
- Note that the Animator component detects that there is a script with OnAnimatorMove, and the Apply Root Motion property shows up as Handled by Script

- Now you should see that the character is moving at the speed specified.
(back to Mecanim introduction)
Page last updated: 2012-11-06
Legacy Animation system
Unity's Animation System allows you to create beautifully animated skinned characters. The Animation System supports animation blending, mixing, additive animations, walk cycle time synchronization, animation layers, control over all aspects of the animation playback (time, speed, blend-weights), mesh skinning with 1, 2 or 4 bones per vertex and finally physically based ragdolls.
For best practices on creating a rigged character with optimal performance in Unity, we recommend that you check out the section on Modeling Optimized Characters.
The following topics are covered on this page:
Importing Inverse Kinematics
When importing animated characters from Maya that are created using IK, you have to check the Bake IK & simulation box in the Import Settings. Otherwise, your character will not animate correctly.
Bringing the character into the Scene
When you have imported your model, you can drag the object from the Project View into the Scene View or Hierarchy View.

The animated character is added by dragging it into the scene
The character above has three animations in the animation list and no default animation. You can add more animations to the character by dragging animation clips from the Project View on to the character (in either the Hierarchy or Scene View). This will also set the default animation. When you hit Play, the default animation will be played.
Animation Editor Guide (Legacy)
The Animation View in Unity allows you to create and modify Animation Clips directly inside Unity. It is designed to act as a powerful and straightforward alternative to external 3D animation programs. In addition to animating movement, the editor also allows you to animate variables of materials and components and augment your Animation Clips with Animation Events, functions that are called at specified points along the timeline.
See the pages about Animation import and Animation Scripting for further information about these subjects.
The Animation View Guide is broken up into several pages that each focus on different areas of the View:-
Using the Animation View
This section covers the basic operations of the Animation View, such as creating and editing Animation Clips.
Using Animation Curves
This section explains how to create Animation Curves, add and move keyframes and set WrapModes. It also offers tips for using Animation Curves to their full advantage.
Editing Curves
This section explains how to navigate efficiently in the editor, create and move keys, and edit tangents and tangent types.
Objects with Multiple Moving Parts
This section explains how to animate Game Objects with multiple moving parts and how to handle cases where there is more than one Animation Component that can control the selected Game Object.
Using Animation Events
This section explains how to add Animation Events to an Animation Clip. Animation Events allow you to call a script function at specified points in the animation's timeline.
Page last updated: 2012-09-10
Animation Scripting (Legacy)
Unity's Animation System allows you to create beautifully animated skinned characters. The Animation System supports animation blending, mixing, additive animations, walk cycle time synchronization, animation layers, control over all aspects of the animation playback (time, speed, blend-weights), mesh skinning with 1, 2 or 4 bones per vertex as well as supporting physically based rag-dolls and procedural animation. To obtain the best results, it is recommended that you read about the best practices and techniques for creating a rigged character with optimal performance in Unity on the Modeling Optimized Characters page.
Making an animated character involves two things; moving it through the world and animating it accordingly. If you want to learn more about moving characters around, take a look at the Character Controller page. This page focuses on the animation. The actual animating of characters is done through Unity's scripting interface.
You can download example demos showing pre-setup animated characters. Once you have learned the basics on this page you can also see the animation script interface.
This page contains the following sections:-
- Animation Blending
- Animation Layers
- Animation Mixing
- Additive Animation
- Procedural Animation
- Animation Playback and Sampling
Animation Blending
In today's games, animation blending is an essential feature to ensure that characters have smooth animations. Animators create separate animations, for example, a walk cycle, run cycle, idle animation or shoot animation. At any point in time during your game you need to be able to transition from the idle animation into the walk cycle and vice versa. Naturally, you want the transition to be smooth and avoid sudden jerks in the motion.
This is where animation blending comes in. In Unity you can have any number of animations playing on the same character. All animations are blended or added together to generate the final animation.
Our first step will be to make a character blend smoothly between the idle and walk animations. In order to make the scripter's job easier, we will first set the Wrap Mode of the animation to Loop. Then we will turn off Play Automatically to make sure our script is the only one playing animations.
Our first script for animating the character is quite simple; we only need some way to detect how fast our character is moving, and then fade between the walk and idle animations. For this simple test, we will use the standard input axes:-
function Update () {
    if (Input.GetAxis("Vertical") > 0.2)
        animation.CrossFade ("walk");
    else
        animation.CrossFade ("idle");
}
To use this script in your project:-
- Create a Javascript file using the Assets > Create menu.
- Copy and paste the code into it
- Drag the script onto the character (it needs to be attached to the GameObject that has the animation)
When you hit the Play button, the character will start walking in place when you hold the up arrow key and return to the idle pose when you release it.
Animation Layers
Layers are an incredibly useful concept that allow you to group animations and prioritize weighting.
Unity's animation system can blend between as many animation clips as you want. You can assign blend weights manually or simply use animation.CrossFade(), which will animate the weight automatically.
Blend weights are always normalized before being applied
Let's say you have a walk cycle and a run cycle, both having a weight of 1 (100%). When Unity generates the final animation, it will normalize the weights, which means the walk cycle will contribute 50% to the animation and the run cycle will also contribute 50%.
However, you will generally want to prioritize which animation receives most weight when there are two animations playing. It is certainly possible to ensure that the weight sums up to 100% manually, but it is easier just to use layers for this purpose.
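For cases where you do want to control the normalized weights by hand, the legacy AnimationState exposes enabled and weight properties directly. A minimal C# sketch of the 50/50 blend described above (the clip names "walk" and "run" are assumptions about your imported animations):

```csharp
using UnityEngine;

// Minimal sketch: plays two legacy clips with equal weights.
// After normalization, each clip contributes 50% to the final pose.
[RequireComponent(typeof(Animation))]
public class ManualWeights : MonoBehaviour {
    void Start() {
        Animation anim = GetComponent<Animation>();
        // Enable both states and assign their raw weights.
        anim["walk"].enabled = true;
        anim["run"].enabled = true;
        anim["walk"].weight = 1f;
        anim["run"].weight = 1f;
    }
}
```

In practice, layers (described next) are usually a more convenient way to prioritize animations than setting weights manually.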
Layering Example
As an example, you might have a shoot animation, an idle and a walk cycle. The walk and idle animations would be blended based on the player's speed but when the player shoots, you would want to show only the shoot animation. Thus, the shoot animation essentially has a higher priority.
The easiest way to do this is to simply keep playing the walk and idle animations while shooting. To do this, we need to make sure that the shoot animation is in a higher layer than the idle and walk animations, which means the shoot animation will receive blend weights first. The walk and idle animations will receive weights only if the shoot animation doesn't use all 100% of the blend weighting. So, when CrossFading the shoot animation in, the weight will start out at zero and over a short period become 100%. In the beginning the walk and idle layer will still receive blend weights but when the shoot animation is completely faded in, they will receive no weights at all. This is exactly what we need!
function Start () {
    // Set all animations to loop
    animation.wrapMode = WrapMode.Loop;

    // except shooting
    animation["shoot"].wrapMode = WrapMode.Once;

    // Put idle and walk into lower layers (The default layer is always 0)
    // This will do two things
    // - Since shoot and idle/walk are in different layers they will not affect
    //   each other's playback when calling CrossFade.
    // - Since shoot is in a higher layer, the animation will replace idle/walk
    //   animations when faded in.
    animation["shoot"].layer = 1;

    // Stop animations that are already playing
    // (In case user forgot to disable play automatically)
    animation.Stop();
}
function Update () {
    // Based on the key that is pressed,
    // play the walk animation or the idle animation
    if (Mathf.Abs(Input.GetAxis("Vertical")) > 0.1)
        animation.CrossFade("walk");
    else
        animation.CrossFade("idle");

    // Shoot
    if (Input.GetButtonDown ("Fire1"))
        animation.CrossFade("shoot");
}
By default the animation.Play() and animation.CrossFade() will stop or fade out animations that are in the same layer. This is exactly what we want in most cases. In our shoot, idle, run example, playing idle and run will not affect the shoot animation and vice versa (you can change this behavior with an optional parameter to animation.CrossFade if you like).
Animation Mixing
Animation mixing allows you to cut down on the number of animations you need to create for your game by having some animations apply to part of the body only. This means such animations can be used together with other animations in various combinations.
You add an animation mixing transform to an animation by calling AddMixingTransform() on the given AnimationState.
Mixing Example
An example of mixing might be something like a hand-waving animation. You might want to make the hand wave either when the character is idle or when it is walking. Without animation mixing you would have to create separate hand waving animations for the idle and walking states. However, if you add the shoulder transform as a mixing transform to the hand waving animation, the hand waving animation will have full control only from the shoulder joint to the hand. Since the rest of the body will not be affected by the hand-waving, it will continue playing the idle or walk animation. Consequently, only the one animation is needed to make the hand wave while the rest of the body is using the idle or walk animation.
// Adds a mixing transform using a Transform variable
var shoulder : Transform;
animation["wave_hand"].AddMixingTransform(shoulder);
Here is another example, using a transform path instead:
function Start () {
// Adds a mixing transform using a path instead
var mixTransform : Transform = transform.Find("root/upper_body/left_shoulder");
animation["wave_hand"].AddMixingTransform(mixTransform);
}
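To see how mixing interacts with layers, here is a sketch that builds on the example above. The clip name "wave_hand" and the transform path are the same assumptions as before; the wave is put in a higher layer so that CrossFade calls for the locomotion animations won't fade it out:

```js
function Start () {
    // Restrict "wave_hand" to the left arm via a mixing transform
    var mixTransform : Transform = transform.Find("root/upper_body/left_shoulder");
    animation["wave_hand"].AddMixingTransform(mixTransform);
    // Higher layer so CrossFade calls for walk/idle in layer 0 won't affect it
    animation["wave_hand"].layer = 1;
}

function Update () {
    // Wave on demand while the body keeps walking or idling
    if (Input.GetButtonDown("Jump"))
        animation.CrossFade("wave_hand");
}
```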
Additive Animations
Additive animations and animation mixing allow you to cut down on the number of animations you have to create for your game, and are important for creating facial animations.
Suppose you want to create a character that leans to the sides as it turns while walking and running. This leads to four combinations (walk-lean-left, walk-lean-right, run-lean-left, run-lean-right), each of which needs an animation. Creating a separate animation for each combination clearly leads to a lot of extra work even in this simple case but the number of combinations increases dramatically with each additional action. Fortunately additive animation and mixing avoids the need to produce separate animations for combinations of simple movements.
Additive Animation Example
Additive animations allow you to overlay the effects of one animation on top of any others that may be playing. When generating additive animations, Unity will calculate the difference between the first frame in the animation clip and the current frame. Then it will apply this difference on top of all other playing animations.
Referring to the previous example, you could make animations to lean right and left and Unity would be able to superimpose these on the walk, idle or run cycle. This could be achieved with code like the following:
private var leanLeft : AnimationState;
private var leanRight : AnimationState;
function Start () {
leanLeft = animation["leanLeft"];
leanRight = animation["leanRight"];
// Put the leaning animation in a separate layer
// So that other calls to CrossFade won't affect it.
leanLeft.layer = 10;
leanRight.layer = 10;
// Set the lean animation to be additive
leanLeft.blendMode = AnimationBlendMode.Additive;
leanRight.blendMode = AnimationBlendMode.Additive;
// Set the lean animation ClampForever
// With ClampForever animations will not stop
// automatically when reaching the end of the clip
leanLeft.wrapMode = WrapMode.ClampForever;
leanRight.wrapMode = WrapMode.ClampForever;
// Enable the animation and fade it in completely
// We don't use animation.Play here because we manually adjust the time
// in the Update function.
// Instead we just enable the animation and set it to full weight
leanRight.enabled = true;
leanLeft.enabled = true;
leanRight.weight = 1.0;
leanLeft.weight = 1.0;
// For testing just play "walk" animation and loop it
animation["walk"].wrapMode = WrapMode.Loop;
animation.Play("walk");
}
// Every frame just set the normalized time
// based on how much lean we want to apply
function Update () {
var lean = Input.GetAxis("Horizontal");
// normalizedTime is 0 at the first frame and 1 at the last frame in the clip
leanLeft.normalizedTime = -lean;
leanRight.normalizedTime = lean;
}
Tip: When using Additive animations, it is critical that you also play some other non-additive animation on every transform that is also used in the additive animation, otherwise the animations will add on top of the last frame's result. This is most certainly not what you want.
Animating Characters Procedurally
Sometimes you want to animate the bones of your character procedurally. For example, you might want the head of your character to look at a specific point in 3D space which is best handled by a script that tracks the target point. Fortunately, Unity makes this very easy, since bones are just Transforms which drive the skinned mesh. Thus, you can control the bones of a character from a script just like the Transforms of a GameObject.
One important thing to know is that the animation system updates Transforms after the Update() function and before the LateUpdate() function. Thus if you want to do a LookAt() function you should do that in LateUpdate() to make sure that you are really overriding the animation.
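For example, a head bone can be made to track a target after the animation system has written its pose. This is a minimal sketch; the head and target variables are assumptions you would assign in the Inspector:

```js
var head : Transform;    // the head bone of the skinned character
var target : Transform;  // the point the head should look at

// LateUpdate runs after the animation system has updated the Transforms,
// so this rotation overrides the animated head pose.
function LateUpdate () {
    head.LookAt(target);
}
```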
Ragdolls are created in the same way. You simply have to attach Rigidbodies, Character Joints and Capsule Colliders to the different bones. This will then physically animate your skinned character.
Animation Playback and Sampling
This section explains how animations in Unity are sampled when they are played back by the engine.
AnimationClips are typically authored at a fixed frame rate. For example, you may create your animation in 3ds Max or Maya at a frame rate of 60 frames per second (fps). When importing the animation in Unity, this frame rate will be read by the importer, so the data of the imported animation is also sampled at 60 fps.
However, games typically run at a variable frame rate. The frame rate may be higher on some computers than on others, and it may also vary from one second to the next based on the complexity of the scene the camera is viewing at any given moment. Basically, this means we can make no assumptions about the exact frame rate the game will run at. Even if an animation is authored at 60 fps, it may be played back at a different frame rate, such as 56.72 fps or 83.14 fps, or practically any other value.
As a result, Unity must sample an animation at variable framerates, and cannot guarantee the framerate for which it was originally designed. Fortunately, animations for 3D computer graphics do not consist of discrete frames, but rather of continuous curves. These curves can be sampled at any point in time, not just at those points in time that correspond to frames in the original animation. In fact, if the game runs at a higher frame rate than the animation was authored with, the animation will actually look smoother and more fluid in the game than it did in the animation software.
For most practical purposes, you can ignore the fact that Unity samples animations at variable framerates. However, if you have gameplay logic that relies on animations that animate transforms or properties into very specific configurations, then you need to be aware that the re-sampling takes place behind the scenes. For example, if you have an animation that rotates an object from 0 to 180 degrees over 30 frames, and you want to know from your code when it has reached half way there, you should not do it by having a conditional statement in your code that checks if the current rotation is 90 degrees. Because Unity samples the animation according to the variable frame rate of the game, it may sample it when the rotation is just below 90 degrees, and the next time right after it reached 90 degrees. If you need to be notified when a specific point in an animation is reached, you should use an AnimationEvent instead.
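An AnimationEvent is placed on the clip with a function name; when playback passes the event's time, the animation system calls a function of that name on the scripts attached to the same GameObject. A minimal receiver for the rotation example above might look like this (the function name ReachedHalfway is hypothetical — it just has to match the name entered on the event):

```js
// Called by the animation system when playback passes the event
// placed at the half-way frame of the rotation clip.
function ReachedHalfway () {
    Debug.Log("The rotation animation passed 90 degrees");
}
```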
Note also that as a consequence of the variable framerate sampling, an animation that is played back using WrapMode.Once may not be sampled at the exact time of the last frame. In one frame of the game the animation may be sampled just before the end of the animation, and in the next frame the time can have exceeded the length of the animation, so it is disabled and not sampled further. If you absolutely need the last frame of the animation to be sampled exactly, you should use WrapMode.ClampForever which will keep sampling the last frame indefinitely until you stop the animation yourself.
Page last updated: 2012-09-05
Navmesh and Pathfinding
A navigation mesh (also known as the Navmesh) is a simplified representation of world geometry, which gameplay agents use to navigate the world. Typically an agent has a goal, or a destination, to which it is trying to find a path, and then navigate to that goal along the path. This process is called pathfinding. Note that Navmesh generation (or baking) is done by game developers inside the editor, while the pathfinding is done by agents at runtime based on that Navmesh.
In the complex world of games, there can be many agents, dynamic obstacles, and constantly changing accessibility levels for different areas in the world. Agents need to react dynamically to those changes. An agent's pathfinding task can be interrupted by or affected by things like collision avoidance with other characters, changing characteristics of the terrain, physical obstacles (such as closing doors), and an update to the actual destination.
Here is a simple example of how to set up a navmesh, and an agent that will do pathfinding on it:
- Create some geometry in the level, for example a Plane or a Terrain.
- In the Inspector Window's right hand corner click on the Static dropdown and make sure that this geometry is marked up as Navigation Static.
- Pull up the Navigation Mesh window (Window -> Navigation).
- Bake the mesh. This will generate the navmesh for all navigation-static geometry.
- Create some dynamic geometry in the scene (such as characters).
- Set up an agent (or multiple agents), by adding a NavMeshAgent component to a dynamic geometry in the scene.
- Give the agent a destination (by setting the destination property) in a script attached to the agent.
- Press play and watch the magic.
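Step 7 above can be as simple as the following script attached to the agent; the goal transform is an assumption you would assign in the Inspector:

```js
var goal : Transform; // assign a target object in the Inspector

function Start () {
    // The NavMeshAgent component added in step 6
    var agent : NavMeshAgent = GetComponent(NavMeshAgent);
    // Setting destination makes the agent compute a path on the
    // baked navmesh and start moving toward the goal.
    agent.destination = goal.position;
}
```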
Note that it is also possible to define custom NavMesh layers. These are needed for situations where some parts of the environment are easier for agents to pass through than others. For parts of the mesh that are not directly connected, it is possible to create Off Mesh Links.
Automatic off-mesh links
Navmesh geometry can also be marked up for automatic off-mesh link generation, like this:

Marking up geometry for automatic off-mesh link generation
Geometry marked up in this way will be checked during the Navmesh Baking process for creating links to other Navmesh geometry. This way, we can control the auto-generation for each GameObject. Whether an off-mesh link will be auto-generated in the baking process is also determined by the Jump distance and the Drop height properties in the settings.
The NavMeshLayer assigned to auto-generated off-mesh links, is the built-in layer Jump. This allows for global control of the auto-generated off-mesh links costs (see Navmesh layers).
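As a sketch of that global control, the cost of the built-in Jump layer can be adjusted from script. This assumes the NavMesh.GetNavMeshLayerFromName and NavMesh.SetLayerCost calls available in this version of Unity:

```js
function Start () {
    // Make auto-generated off-mesh links (the built-in "Jump" layer)
    // twice as expensive, so agents prefer walking around when possible.
    var jumpLayer : int = NavMesh.GetNavMeshLayerFromName("Jump");
    NavMesh.SetLayerCost(jumpLayer, 2.0);
}
```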
Note that manual off-mesh links can also be set up (described here).
Page last updated: 2012-04-24Navmesh Baking
Once the Navmesh geometry and layers are marked up, it's time to bake the Navmesh geometry.
Inside the Navigation window (Window -> Navigation), go to the Bake tab (the upper-right corner), and click on the Bake button (the lower-right corner).

Navigation Bake Window
Here are the properties that affect Navmesh baking:
| Radius | radius of the "typical" agent (preferably the smallest). |
| Height | height of the "typical" agent (the "clearance" needed to get a character through). |
| Max Slope | all surfaces with a higher slope than this will be discarded. |
| Step height | the height difference below which navmesh regions are considered connected. |
| Drop height | If the value of this property is positive, off-mesh links will be placed for adjacent navmesh surfaces where the height difference is below this value. |
| Jump distance | If the value of this property is positive, off-mesh links will be placed for adjacent navmesh surfaces where the horizontal distance is below this value. |
| Advanced | |
| Min region area | Regions with areas below this threshold will be discarded. |
| Width inaccuracy % | Allowable width inaccuracy |
| Height inaccuracy % | Allowable height inaccuracy |
| Height mesh | If this option is on, original height information is stored. This has performance implications for speed and memory usage. |
Note that the baked navmesh is part of the scene and agents will be able to traverse it. To remove the navmesh, click on Clear when you're in the Bake tab.
(back to Navigation and Pathfinding)
Page last updated: 2012-04-24
Sound
Audio Listener
The Audio Listener acts as a microphone-like device. It receives input from any given Audio Source in the scene and plays sounds back through the computer's speakers. For most applications it makes the most sense to attach the listener to the Main Camera. If the audio listener is within the boundaries of a Reverb Zone, reverberation is applied to all audible sounds in the scene. (PRO only) Furthermore, Audio Effects can be applied to the listener and they will be applied to all audible sounds in the scene.

The Audio Listener, attached to the Main Camera
Properties
The Audio Listener has no properties. It simply needs to be added in order to work. It is always added to the Main Camera by default.
Details
The Audio Listener works in conjunction with Audio Sources, allowing you to create the aural experience for your games. When the Audio Listener is attached to a GameObject in your scene, any Sources that are close enough to the Listener will be picked up and output to the computer's speakers. Each scene can only have one Audio Listener that works properly.
If the Sources are 3D (see the import settings in Audio Clip), the Listener will emulate the position, velocity and orientation of the sound in the 3D world (you can tweak the attenuation and 3D/2D behavior in great detail on the Audio Source). 2D sounds ignore any 3D processing. For example, if your character walks through a city and enters a night club, the night club's music should probably be 2D, while the individual voices of characters in the club should be mono, with their realistic positioning handled by Unity.
You should attach the Audio Listener either to the Main Camera or to the GameObject that represents the player. Try both to find what suits your game best.
Hints
- Each scene can only have one Audio Listener.
- You can access the project-wide audio settings using the Audio Manager, found under Edit->Project Settings->Audio in the menu.
- View the Audio Clip component page for more information about mono vs. stereo sounds.
Audio Source
The Audio Source plays back an Audio Clip in the scene. If the clip is a 3D clip, the source is played back at a given position and will attenuate over distance. The audio can be spread out between speakers (stereo to 7.1) (Spread) and morphed between 3D and 2D (Pan Level). This can be controlled over distance with falloff curves. Also, if the listener is within one or multiple Reverb Zones, reverberation is applied to the source. (PRO only) Individual filters can be applied to each audio source for an even richer audio experience. See Audio Effects for more details.

An Audio Source gizmo in the Scene View and its settings in the Inspector.
Properties
| Audio Clip | Reference to the sound clip that will be played. |
| Mute | If enabled, the sound will be playing but muted. |
| Bypass Effects | Quickly "bypasses" any filter effects applied to the audio source. An easy way to turn all effects on or off. |
| Play On Awake | If enabled, the sound will start playing the moment the scene launches. If disabled, you need to start it from scripting using Play(). |
| Loop | If enabled, the Audio Clip will loop when it reaches the end. |
| Priority | Determines the priority of this audio source among all the audio sources that coexist in the scene (priority: 0 = most important, 256 = least important, default = 128). Use 0 for music tracks to avoid them occasionally being swapped out. |
| Volume | How loud the sound is at a distance of one world unit (one meter) from the Audio Listener. |
| Pitch | Amount of change in pitch due to slowdown or speedup of the Audio Clip. A value of 1 is normal playback speed. |
| 3D Sound Settings | Settings that are applied to the audio source if the Audio Clip is a 3D sound. |
| Pan Level | Sets how much the 3D engine affects the audio source. |
| Spread | Sets the spread angle of a 3D stereo or multichannel sound in speaker space. |
| Doppler Level | Determines how much Doppler effect will be applied to this audio source (if set to 0, no effect is applied). |
| Min Distance | Within MinDistance, the sound stays at its loudest; outside MinDistance it begins to attenuate. Increase the MinDistance of a sound to make it "louder" in the 3D world, and decrease it to make it "quieter". |
| Max Distance | The distance at which the sound stops attenuating. Beyond this point it stays at the volume it has at MaxDistance units from the listener and will not attenuate any further. |
| Rolloff Mode | How fast the sound fades. The higher the value, the closer the listener has to be before hearing the sound (determined by a graph). |
| Logarithmic Rolloff | The sound is loud when you are close to the audio source, but drops off significantly fast as you move away from the object. |
| Linear Rolloff | The further you move from the audio source, the less you hear it. |
| Custom Rolloff | The sound from the audio source behaves according to how you set the rolloff graph. |
| 2D Sound Settings | Settings that are applied to the audio source if the Audio Clip is a 2D sound. |
| Pan 2D | Sets how much the engine affects the audio source. |
Types of Rolloff
There are three rolloff modes: Logarithmic, Linear and Custom Rolloff. Custom Rolloff can be modified by editing the volume distance curve as described below. If you try to modify the volume distance function while the mode is set to Logarithmic or Linear, the type will automatically change to Custom Rolloff.

The rolloff modes that an audio source can have.
Distance Functions
Several properties of the audio can be modified as a function of the distance between the audio source and the audio listener:
Volume: amplitude (0.0 - 1.0) over distance.
Pan: left (-1.0) to right (1.0) over distance.
Spread: angle (0.0 - 360.0 degrees) over distance.
Low-Pass (only if a Low-Pass Filter is attached to the audio source): cutoff frequency over distance.

Distance functions for the Volume, Pan, Spread and Low-Pass audio filter. The current distance to the Audio Listener is marked in the graph.
To modify a distance function, edit its curve directly. For more information, see Editing Curves.
Creating Audio Sources
An Audio Source does nothing without an assigned Audio Clip. The clip is the actual sound file that will be played back. The source is like a controller for starting and stopping playback of that clip and modifying other audio properties.
To create a new Audio Source:
- Import your audio files into your Unity project. These become Audio Clips.
- Go to GameObject->Create Empty from the menu bar.
- With the new GameObject selected, select Component->Audio->Audio Source.
- Assign the Audio Clip property of the Audio Source component in the Inspector.
Note: If you want to create an Audio Source for just one Audio Clip in your Assets folder, you can simply drag and drop that clip into the Scene View and an Audio Source game object will be created for it automatically. Dragging and dropping a clip onto an existing GameObject will attach the clip along with a new Audio Source, if there isn't one already there. If the object already has an Audio Source, the newly dragged clip will replace the clip that the source currently uses.
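Once a clip is assigned, playback can also be controlled from a script attached to the same GameObject. A minimal sketch (the five-second stop is just for illustration):

```js
// Plays the assigned clip when the scene starts, then stops it after 5 seconds.
// "audio" is the AudioSource component on this GameObject.
function Start () {
    audio.Play();
    yield WaitForSeconds(5);
    audio.Stop();
}
```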
Platform-specific details

iOS
On mobile platforms, compressed audio is encoded as MP3 for faster decompression. Beware that this compression can remove samples at the end of the clip and potentially break a "perfectly-looping" clip. Make sure the clip ends right on an MP3 sample boundary to avoid sample clipping (tools to do this are widely available). For performance reasons, audio clips can be played back using the Apple hardware codec. To enable this, check the "Use Hardware" checkbox in the import settings. See the Audio Clip documentation for more details.

Android
On mobile platforms, compressed audio is encoded as MP3 for faster decompression. Beware that this compression can remove samples at the end of the clip and potentially break a "perfectly-looping" clip. Make sure the clip ends right on an MP3 sample boundary to avoid sample clipping (tools to do this are widely available).
Audio Clip
Audio Clips contain the audio data used by Audio Sources. Unity supports mono, stereo and multichannel audio assets (up to eight channels). Unity can import the following audio file formats: .aif, .wav, .mp3 and .ogg, and the following tracker module file formats: .xm, .mod, .it and .s3m. Tracker module assets behave the same way as any other audio asset in Unity, except that no waveform preview can be rendered in the asset import inspector.

The Audio Clip Inspector
Properties
| Audio Format | The specific format that will be used for the sound at runtime. |
| Native | Larger file size, higher quality. Best for very short sound effects. |
| Compressed | Smaller file size, lower or variable quality. Best for medium-length sound effects and music. |
| 3D Sound | If enabled, the sound will play back in 3D space. Both mono and stereo sounds can be played in 3D. |
| Force to mono | If enabled, the audio clip will be down-mixed to a single-channel sound. |
| Load Type | The method Unity uses to load the audio at runtime. |
| Decompress on load | Decompresses the sound as soon as it is loaded. Use this for smaller compressed sounds to avoid the performance overhead of decompressing on the fly. Be aware that decompressing sounds on load can use ten times more memory than keeping them compressed in memory, so don't use this for large files. |
| Compressed in memory | Keeps the sound compressed in memory and decompresses it while playing. This has a slight performance overhead (especially for Ogg/Vorbis compressed files), so only use it for bigger files. Note that, due to technical limitations, this option silently switches to "Stream from disc" (see below) for Ogg Vorbis assets on platforms that use FMOD audio. |
| Stream from disc | Streams the audio data directly from disc. This uses only a fraction of the original sound's size in memory. Use this for music or very long tracks. As a rule of thumb, keep to one or two simultaneous streams, depending on the hardware. |
| Compression | The amount of compression applied to a "Compressed" clip. Statistics about the file size can be seen under the slider. Drag the slider to a setting that leaves the playback "good enough" while keeping the file small enough for your distribution needs. |
| Hardware Decoding | (iOS only) Available for compressed audio on iOS devices. Uses Apple's hardware decoder for less CPU-intensive decompression. Check the platform-specific details for more info. |
| Gapless looping | (Android/iOS only) Use this when compressing a perfectly-looping audio source file (in an uncompressed PCM format) to preserve the loop. Standard MPEG encoders introduce silence around the loop point, which plays back as a little "click" or "pop". Unity handles this smoothly for you. |
Importing Audio Assets
Unity supports both compressed and native audio. Any type of file (except MP3/Ogg Vorbis) is initially imported as native. Compressed audio files must be decompressed by the CPU while the game is running, but have smaller file sizes. If Stream is checked, the audio is decompressed on the fly; otherwise it is decompressed entirely on load. Native PCM formats (WAV, AIFF) have the benefit of higher fidelity without increasing CPU load, but the resulting files are much larger. Module files (.mod, .it, .s3m, .xm) can deliver very high quality sound with an extremely low footprint.
As a general rule of thumb, compressed audio (or modules) is best for long files like background music or dialog, while uncompressed audio is better suited for short sound effects. Start with high compression and use the compression slider to reduce the amount of compression, fine-tuning around the point where the loss of sound quality becomes noticeable.
Using 3D Audio
If an audio clip is marked as a 3D sound, it will be played back so as to simulate its position in the game world's 3D space. 3D sounds emulate the distance and location of sounds by attenuating the volume and panning across speakers. Both mono and multichannel sounds can be positioned in 3D. For multichannel audio, use the Audio Source's "Spread" option to spread and split the discrete channels in speaker space. Unity offers a variety of options for controlling and fine-tuning the audio behavior in 3D space; see the Audio Source page for details.
Platform-specific details

iOS
On mobile platforms, compressed audio is encoded as MP3 to reduce the CPU load during decompression.
For performance reasons, audio clips can be played back using the Apple hardware codec. To enable this, check the "Hardware Decoding" checkbox in the audio importer. Note that only one hardware audio stream can be decompressed at a time, including the background iPod audio.
If the hardware decoder is not available, decompression falls back on the software decoder (on iPhone 3GS and later, Apple's software decoder is used in preference to Unity's (FMOD) own decoder).

Android
On mobile platforms, compressed audio is encoded as MP3 to reduce the CPU load during decompression.
Game Interface Elements
Unity gives you a number of options for creating your game's graphical user interface (GUI).
You can use GUI Text and GUI Texture objects, or generate the interface from scripts using UnityGUI.
The rest of this page contains a detailed guide to getting up and running with UnityGUI.
GUI Scripting Guide
Overview
UnityGUI allows you to create a wide variety of highly functional GUIs very quickly and easily. Rather than creating a GUI object, manually positioning it, and then writing a script that handles its functionality, you can do everything at once with just a small amount of code. This works by creating GUI Controls, which are instantiated, positioned and handled with a single function call.
For example, the following code will create and handle a button with no additional work in the editor or elsewhere:
// JavaScript
function OnGUI () {
if (GUI.Button (Rect (10,10,150,100), "I am a button")) {
print ("You clicked the button!");
}
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
void OnGUI () {
if (GUI.Button (new Rect (10,10,150,100), "I am a button")) {
print ("You clicked the button!");
}
}
}

The button created by the code above
Although this example is very simple, there are very powerful and complex techniques available in UnityGUI. GUI construction is a broad subject, but the following sections should help you get up to speed as quickly as possible. This guide can be read straight through or used as reference material.
UnityGUI Basics
This section covers the most important concepts of UnityGUI, giving you an overview as well as a set of examples you can paste into your own projects. UnityGUI is very friendly to play with, so this is a good place to get started.
Controls
This section lists every available control in UnityGUI, along with code samples and images showing the results.
Customization
It is important to be able to change the appearance of the GUI to match the look of your game. All controls in UnityGUI can be customized with GUIStyles and GUISkins, and this section explains how to use them.
Layout Modes
UnityGUI offers two ways to arrange your GUI: you can manually place each control on the screen, or you can use an automatic layout system that works in a similar way to HTML tables. Either system can be used as desired, and the two can be freely mixed. This section explains the functional differences between the two systems, including examples.
Extending UnityGUI
UnityGUI is very easy to extend with new control types. This chapter shows you how to make simple compound controls, complete with integration into Unity's event system.
Extending Unity Editor
The GUI of the Unity editor is actually written using UnityGUI. Consequently, the editor is highly extensible using the same type of code you would use for in-game GUI. In addition, there are a number of editor-specific GUI widgets that are useful when creating custom editor GUIs.
Page last updated: 2012-11-13
Networked Multiplayer
Realtime networking is a complex field but Unity makes it easy to add networking features to your game. Nevertheless, it is useful to have some idea of the scope of networking before using it in a game. This section explains the fundamentals of networking along with the specifics of Unity's implementation. If you have never created a network game before then it is strongly recommended that you work through this guide before getting started.
High Level Overview
This section outlines all the concepts involved in networking and serves as an introduction to deeper topics.
Networking Elements in Unity
This section of the guide covers Unity's implementation of the concepts explained in the overview.
RPC Details
Remote Procedure Call or RPC is a way of calling a function on a remote machine. This may be a client calling a function on the server, or the server calling a function on some or all clients. This section explains RPC concepts in detail.
State Synchronization
State Synchronization is a method of regularly updating a specific set of data across two or more game instances running on the network.
Minimizing Bandwidth
Every choice you make about where and how to share data will affect the network bandwidth your game uses. This page explains how bandwidth is used and how to keep usage to a minimum.
Network View
Network Views are Components you use to share data across the network and are a fundamental aspect of Unity networking. This page explains them in detail.
Network Instantiate
A complex subject in networking is ownership of an object and determination of who controls what. Network Instantiation handles this task for you, as explained in this section. Also covered are some more sophisticated alternatives for situations where you need more control over object ownership.
Master Server
The Master Server is like a game lobby where servers can advertise their presence to clients. It can also enable communication from behind a firewall or home network using a technique called NAT punchthrough (with help from a facilitator) to make sure your players can always connect with each other. This page explains how to use the Master Server.
Page last updated: 2011-11-18
iphone-GettingStarted
Building games for devices like the iPhone and iPad requires a different approach than you would use for desktop PC games. Unlike the PC market, your target hardware is standardized and not as fast or powerful as a computer with a dedicated video card. Because of this, you will have to approach the development of your games for these platforms a little differently. Also, the features available in Unity for iOS differ slightly from those for desktop PCs.
Setting Up Your Apple Developer Account
Before you can run Unity iOS games on the actual device, you will need to have your Apple Developer account approved and set up. This includes establishing your team, adding your devices, and finalizing your provisioning profiles. All this setup is performed through Apple's developer website. Since this is a complex process, we have provided a basic outline of the tasks that must be completed before you can run code on your iOS devices. However, the best thing to do is follow the step-by-step instructions at Apple's iPhone Developer portal.
Note: We recommend that you set up your Apple Developer account before proceeding because you will need it to use Unity to its full potential with iOS.
Accessing iOS Functionality
Unity provides a number of scripting APIs to access the multi-touch screen, accelerometer, device geographical location system and much more. You can find out more about the script classes on the iOS scripting page.
Exposing Native C, C++ or Objective-C Code to Scripts
Unity allows you to call custom native functions written in C, C++ or Objective-C directly from C# scripts. To find out how to bind native functions, visit the plugins page.
Prepare Your Application for In-App Purchases
The Unity iOS runtime allows you to download new content and you can use this feature to implement in-app purchases. See the downloadable content manual page for further information.
Occlusion Culling
Unity supports occlusion culling which is useful for squeezing high performance out of complex scenes with many objects. See the occlusion culling manual page for further information.
Splash Screen Customization
See the splash screen customization page to find out how to change the image your game shows while launching.
Troubleshooting and Reporting Crashes
If you are experiencing crashes on the iOS device, please consult the iOS troubleshooting page for a list of common issues and solutions. If you can't find a solution there, please file a bug report for the crash (menu: Help > Report a Bug in the Unity editor).
How Unity's iOS and Desktop Targets Differ
Statically Typed JavaScript
Dynamic typing in JavaScript is always turned off in Unity when targeting iOS (this is equivalent to #pragma strict getting added to all your scripts automatically). Static typing greatly improves performance, which is especially important on iOS devices. When you switch an existing Unity project to the iOS target, you will get compiler errors if you are using dynamic typing. You can easily fix these either by using explicitly declared types for the variables that are causing errors or by taking advantage of type inference.
MP3 Instead of Ogg Vorbis Audio Compression
For performance reasons, MP3 compression is favored on iOS devices. If your project contains audio files with Ogg Vorbis compression, they will be re-compressed to MP3 during the build. Consult the audio clip documentation for more information on using compressed audio on the iPhone.
PVRTC Instead of DXT Texture Compression
Unity iOS does not support DXT textures. Instead, PVRTC texture compression is natively supported by iPhone/iPad devices. Consult the texture import settings documentation to learn more about iOS texture formats.
Movie Playback
MovieTextures are not supported on iOS. Instead, full-screen streaming playback is provided via scripting functions. To learn about the supported file formats and scripting API, consult the movie page in the manual.
Further Reading
- Unity iOS Basics
- Unity Remote
- iOS Scripting
- iOS Hardware Guide
- Optimizing Performance in iOS
- Account Setup
- Features currently not supported by Unity iOS
- Building Plugins for iOS
- Preparing your application for "In App Purchases"
- Customizing the Splash screen of Your Mobile Application
- Troubleshooting
- Reporting crash bugs on iOS
iphone-basic
This section covers the most common and important questions that come up when starting to work with iOS.
Prerequisites
I've just received iPhone Developer approval from Apple, but I've never developed for iOS before. What do I do first?
A: Download the SDK, get up and running on the Apple developer site, and set up your team, devices, and provisioning. We've provided a basic list of steps to get you started.
Can Unity-built games run in the iPhone Simulator?
A: No, but Unity iOS can build to iPad Simulator if you're using the latest SDK. However the simulator itself is not very useful for Unity because it does not simulate all inputs from iOS or properly emulate the performance you get on the iPhone/iPad. You should test out gameplay directly inside Unity using the iPhone/iPad as a remote control while it is running the Unity Remote application. Then, when you are ready to test performance and optimize the game, you publish to iOS devices.
Unity Features
How do I work with the touch screen and accelerometer?
A: In the scripting reference inside your Unity iOS installation, you will find classes that provide the hooks into the device's functionality that you will need to build your apps. Consult the Input System page for more information.
My existing particle systems seem to run very slowly on iOS. What should I do?
A: iOS has relatively low fillrate. If your particles cover a rather large portion of the screen with multiple layers, it will kill iOS performance even with the simplest shader. We suggest baking your particle effects into a series of textures off-line. Then, at run-time, you can use 1-2 particles to display them via animated textures. You can get fairly decent looking effects with a minimum amount of overdraw this way.
Can I make a game that uses heavy physics?
A: Physics can be expensive on iOS as it requires a lot of floating point number crunching. You should completely avoid MeshColliders if at all possible, but they can be used if they are really necessary. To improve performance, use a low fixed framerate by adjusting Fixed Delta Time in the Time settings. A framerate of 10-30 is recommended. Enable rigidbody interpolation to achieve smooth motion while using low physics frame rates. In order to achieve a completely fluid framerate without oscillations, it is best to pick a fixed deltaTime value based on the average framerate your game is getting on iOS. Either 1:1 or half the frame rate is recommended. For example, if you get 30 fps, you should use a fixed framerate of 15 or 30 fps (a fixed deltaTime of 0.066 or 0.033).
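The advice above can also be applied from code. This sketch assumes the game averages around 30 fps and uses a 15 fps physics rate:

```js
function Start () {
    // Half the average 30 fps render rate: one physics step every 1/15 s.
    Time.fixedDeltaTime = 0.066;
    // Interpolate the rigidbody so its motion looks smooth between
    // the infrequent physics updates.
    rigidbody.interpolation = RigidbodyInterpolation.Interpolate;
}
```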
Can I access the gallery, music library or the native iPod player in Unity iOS?
A: Yes - if you implement it. Unity iPhone supports the native plugin system, where you can add any feature you need -- including access to Gallery, Music library, iPod Player and any other feature that the iOS SDK exposes. Unity iOS does not provide an API for accessing the listed features through Unity scripts.
UnityGUI Considerations
What kind of performance impact will UnityGUI make on my games?
A: UnityGUI is fairly expensive when many controls are used. It is ideal to limit your use of UnityGUI to game menus or very minimal GUI controls while your game is running. It is important to note that every object with a script containing an OnGUI() call will require additional processor time -- even if it is an empty OnGUI() block. It is best to disable any scripts that have an OnGUI() call if the GUI controls are not being used. You can do this by setting the script's enabled property to false.
Any other tips for using UnityGUI?
A: Try to use GUILayout as little as possible. If you are not using GUILayout at all in an OnGUI() call, you can disable all GUILayout rendering with MonoBehaviour.useGUILayout = false; which doubles GUI rendering performance. Finally, use as few GUI elements as possible while rendering 3D scenes.
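Both tips can be sketched in one script (the GetComponent line at the end assumes a hypothetical script name, MyMenuScript):

```js
// Attach to an object whose OnGUI only uses GUI.* calls (no GUILayout).
function Awake () {
    useGUILayout = false;    // skip the GUILayout pass entirely
}

function OnGUI () {
    if (GUI.Button (Rect (10, 10, 100, 30), "Menu"))
        print ("Menu opened");
}

// Elsewhere, disable the whole script while its GUI is not needed:
// GetComponent(MyMenuScript).enabled = false;
```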
unity-remote
Unity Remote is an application that allows you to use your iOS device as a remote control for your project in Unity. This is useful during development since it is much quicker to test your project in the editor with remote control than to build and deploy it to the device after each change.
Where can I find Unity Remote?
Unity Remote is available for download from the App Store at no charge. If you prefer to build and deploy the application yourself, you can download the source from the Unity website.
How do I build Unity Remote?
First, download the project source code here and unzip it to your preferred location. The zip file contains an XCode project to build Unity Remote and install it on your device.
Assuming you have already created a provisioning profile and successfully installed iOS builds on your device, you just need to open the Xcode project file UnityRemote.xcodeproj. Once Xcode has launched, click "Build and Go" to install the app on your iOS device. If you have never built and run applications before, we recommend that you try building some of the Apple examples first to familiarize yourself with Xcode and iOS.
Once Unity Remote is installed, make sure your device is connected via Wi-Fi to the same network as your development machine. Launch Unity Remote on your iPhone/iPad while Unity is running on your computer and select your computer from the list that appears. Now, whenever you enter Play mode in the Editor, your device will act as a remote control that you can use for developing and testing your game. You can control the application with the device wirelessly and you will also see a low-res version of the app on the device's screen.
Note: The Unity editor cannot emulate the device's hardware perfectly, so you may not get the exact behavior (graphics performance, touch responsiveness, sound playback, etc) that you would on a real device.
Xcode shows strange errors while deploying Unity Remote to my device. What should I do?
This indicates that the default Identifier in the Unity Remote project is not compatible with your provisioning profile. You will have to alter this Identifier manually in your Xcode project so that it matches your provisioning profile.
You will need to create an AppID with a trailing asterisk if you have not already done so; you can do this in the Program Portal of Apple's iPhone Developer Program. First, go to the Program Portal and choose the AppIDs tab. Then, click the Add ID button in the top right corner and type your usual bundle identifier followed by a dot and an asterisk (eg, com.mycompany.*) in the App ID Bundle Seed ID and Bundle Identifier field. Add the new AppID to your provisioning profile, then download and reinstall it. Don't forget to restart Xcode afterwards. If you have any problems creating the AppID, consult the Provisioning How-to section on Apple's website.

Don't forget to change the Identifier before you install Unity Remote on your device.
Open the Unity Remote project with Xcode. From the menu, select . This will open a new window entitled Target "Unity Remote" Info. Select the Properties tab. Change the Identifier property field from com.unity3d.UnityRemote to the bundle identifier in your AppID, followed by a "." (dot), followed by "UnityRemote". For example, if your provisioning profile contains the ##.com.mycompany.* AppID, change the Identifier field to com.mycompany.UnityRemote.
Next, select from the menu, and compile and install Unity Remote again. You may also need to change the active SDK from Simulator to Device - 2.0 | Release. There is no problem using SDK 2.0 even if your device runs a newer version of the OS.
I'm getting really poor graphics quality when running my game in Unity Remote. What can I do to improve it?
When you use Unity Remote, the game actually runs on your Mac while its visual content is heavily compressed and streamed to the device. As a result, what you see on the device screen is just a low-res version of what the app would really look like. You should check how the game runs on the device occasionally by building and deploying the app (select in the Unity editor).
Unity Remote is laggy. Can I improve it?
The performance of Unity Remote depends heavily on the speed of the Wi-Fi network, the quality of the networking hardware and other factors. For the best experience, create an ad-hoc network between your Mac and iOS device. Click the Airport icon on your Mac and choose "Create Network". Then, enter a name and password and click OK. On the device, choose Settings->Wi-Fi and select the new Wi-Fi network you have just created. Remember that an ad-hoc network is really a wireless connection that does not involve a wireless access point. Therefore, you will usually not have internet access while using ad-hoc networking.
Turning Bluetooth off on both on your iPhone/iPad and on Mac should also improve connection quality.
If you do not need to see the game view on the device, you can turn image synchronization off in the Remote machine list. This will reduce the network traffic needed for the Remote to work.
The connection to Unity Remote is easily lost
This can be due to a problem with the installation or other factors that prevent Unity Remote from functioning properly. Try the following steps in sequence, checking whether performance improves at each step before moving on to the next:
- First of all, check if Bluetooth is switched on. Both your Mac and iOS device should have Bluetooth disabled for best performance.
- Delete the settings file located at ~/Library/Preferences/com.unity3d.UnityEditoriPhone.plist
- Reinstall the game on your iPhone/iPad.
- Reinstall Unity on your Mac.
- As a last resort, performing a hard reset on the iOS device can sometimes improve the performance of Unity Remote.
If you still experience problems then try installing Unity Remote on another device (in another location if possible) and see if it gives you better results. There could be problems with RF interference or other software influencing the performance of the wireless adapter on your Mac or iOS device.
Unity Remote doesn't see my Mac. What should I do?
- Check if Unity Remote and your Mac are connected to the same wireless network.
- Check your firewall settings, router security settings, and any other hardware/software that may filter packets on your network.
- Leave Unity Remote running, switch off your Mac's Airport for a minute or two, and switch on again.
- Restart both Unity and Unity Remote. Sometimes you also need to cold-restart your iPhone/iPad (hold down the menu and power buttons simultaneously).
- Unity Remote uses the Apple Bonjour service, so check that your Mac has it switched on.
- Reinstall Unity Remote from the latest Unity iOS package.
iphone-API
Most features of the iOS devices are exposed through the Input and Handheld classes. For cross-platform projects, UNITY_IPHONE is defined for conditionally compiling iOS-specific C# code.
Further Reading
Page last updated: 2012-11-26
iphone-Input

Desktop
Note: Keyboard, joystick and gamepad input work on the desktop versions of Unity (including webplayer and Flash) but not on mobiles.
Unity supports keyboard, joystick and gamepad input.
Virtual axes and buttons can be created in the Input Manager, and end users can configure Keyboard input in a nice screen configuration dialog.

You can set up joysticks, gamepads, keyboard, and mouse, then access them all through one simple scripting interface.
From scripts, all virtual axes are accessed by their name.
Every project has the following default input axes when it's created:
- Horizontal and Vertical are mapped to w, a, s, d and the arrow keys.
- Fire1, Fire2, Fire3 are mapped to Control, Option (Alt), and Command, respectively.
- Mouse X and Mouse Y are mapped to the delta of mouse movement.
- Window Shake X and Window Shake Y are mapped to the movement of the window.
Adding new Input Axes
If you want to add new virtual axes, go to the menu. Here you can also change the settings of each axis.

You can map each axis to two buttons on a joystick, mouse, or keyboard.
| Name | The string used to check this axis from a script. |
| Descriptive Name | Positive value name displayed in the input tab of the dialog for standalone builds. |
| Descriptive Negative Name | Negative value name displayed in the Input tab of the dialog for standalone builds. |
| Negative Button | The button used to push the axis in the negative direction. |
| Positive Button | The button used to push the axis in the positive direction. |
| Alt Negative Button | Alternative button used to push the axis in the negative direction. |
| Alt Positive Button | Alternative button used to push the axis in the positive direction. |
| Gravity | Speed in units per second that the axis falls toward neutral when no buttons are pressed. |
| Dead | Size of the analog dead zone. All analog device values within this range map to neutral. |
| Sensitivity | Speed in units per second that the axis will move toward the target value. This is for digital devices only. |
| Snap | If enabled, the axis value will reset to zero when pressing a button of the opposite direction. |
| Invert | If enabled, the Negative Buttons provide a positive value, and vice-versa. |
| Type | The type of inputs that will control this axis. |
| Axis | The axis of a connected device that will control this axis. |
| Joy Num | The connected Joystick that will control this axis. |
Use these settings to fine-tune the look and feel of input. They are all documented with tooltips in the Editor as well.
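As a rough illustration of how Sensitivity, Gravity and Snap interact on a digital axis, the following sketch models one frame of axis movement. This is plain Python, not Unity code; the function name and the simplified model are invented for the example:

```python
def step_axis(value, pressed_dir, sensitivity, gravity, dt, snap=False):
    """Advance a virtual axis by one frame (simplified model of the
    digital-axis settings described above; illustrative only)."""
    if pressed_dir != 0:
        # Snap: jump straight to zero when the opposite button is pressed.
        if snap and value * pressed_dir < 0:
            value = 0.0
        # Move toward the pressed direction at `sensitivity` units/second.
        value += pressed_dir * sensitivity * dt
    else:
        # No button held: fall back toward neutral at `gravity` units/second.
        if value > 0:
            value = max(0.0, value - gravity * dt)
        else:
            value = min(0.0, value + gravity * dt)
    # Axis values always stay within [-1, 1].
    return max(-1.0, min(1.0, value))

# Hold the positive button for half a second at sensitivity 3:
v = 0.0
for _ in range(30):                 # 30 frames at 60 fps = 0.5 s
    v = step_axis(v, +1, sensitivity=3.0, gravity=3.0, dt=1.0 / 60.0)
print(v)                            # saturates at 1.0
```

Releasing the button then lets Gravity pull the value back toward 0 at the configured rate.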
Using Input Axes from Scripts
You can query the current state from a script like this:
value = Input.GetAxis ("Horizontal");
An axis has a value between -1 and 1. The neutral position is 0. This is the case for joystick input and keyboard input.
However, Mouse Delta and Window Shake Delta represent how much the mouse or window moved during the last frame. This means their values can be larger than 1 or smaller than -1 when the user moves the mouse quickly.
It is possible to create multiple axes with the same name. When getting the input axis, the axis with the largest absolute value will be returned. This makes it possible to assign more than one input device to one axis name. For example, create one axis for keyboard input and one axis for joystick input with the same name. If the user is using the joystick, input will come from the joystick, otherwise input will come from the keyboard. This way you don't have to consider where the input comes from when writing scripts.
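The selection rule for same-named axes can be sketched in one line. This is an illustrative Python snippet, not Unity code; the function name is invented:

```python
def resolve_axis(samples):
    """Combine several same-named axis readings as described above:
    the reading with the largest absolute value wins."""
    return max(samples, key=abs)

# Keyboard axis idle (0.0) while the joystick is pushed left (-0.8):
print(resolve_axis([0.0, -0.8]))    # -0.8, the joystick wins
```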
Button Names
To map a key to an axis, you have to enter the key's name in the Positive Button or Negative Button property in the Inspector.
The names of keys follow this convention:
- Normal keys: "a", "b", "c" ...
- Number keys: "1", "2", "3", ...
- Arrow keys: "up", "down", "left", "right"
- Keypad keys: "[1]", "[2]", "[3]", "[+]", "[equals]"
- Modifier keys: "right shift", "left shift", "right ctrl", "left ctrl", "right alt", "left alt", "right cmd", "left cmd"
- Mouse Buttons: "mouse 0", "mouse 1", "mouse 2", ...
- Joystick Buttons (from any joystick): "joystick button 0", "joystick button 1", "joystick button 2", ...
- Joystick Buttons (from a specific joystick): "joystick 1 button 0", "joystick 1 button 1", "joystick 2 button 0", ...
- Special keys: "backspace", "tab", "return", "escape", "space", "delete", "enter", "insert", "home", "end", "page up", "page down"
- Function keys: "f1", "f2", "f3", ...
The names used to identify the keys are the same in the scripting interface and the Inspector.
value = Input.GetKey ("a");
Mobile Input
On iOS and Android, the Input class offers access to touchscreen, accelerometer and geographical/location input.
Access to keyboard on mobile devices is provided via the iOS keyboard.
Multi-Touch Screen
The iPhone and iPod Touch devices are capable of tracking up to five fingers touching the screen simultaneously. You can retrieve the status of each finger touching the screen during the last frame by accessing the Input.touches property array.
Android devices don't have a unified limit on how many fingers they track. Instead, it varies from device to device and can be anything from two-touch on older devices to five fingers on some newer devices.
Each finger touch is represented by an Input.Touch data structure:
| fingerId | The unique index for a touch. |
| position | The screen position of the touch. |
| deltaPosition | The screen position change since the last frame. |
| deltaTime | Amount of time that has passed since the last state change. |
| tapCount | The iPhone/iPad screen is able to distinguish quick finger taps by the user. This counter lets you know how many times the user has tapped the screen without moving the finger to the sides. Android devices do not count taps, so this field is always 1. |
| phase | Describes the state (or "phase") of the touch. It can help you determine whether the touch has just begun, whether the user has moved their finger, or whether they have just lifted it. |
Phase can be one of the following:
| Began | A finger just touched the screen. |
| Moved | A finger moved on the screen. |
| Stationary | A finger is touching the screen but hasn't moved since the last frame. |
| Ended | A finger was lifted from the screen. This is the final phase of a touch. |
| Canceled | The system cancelled tracking for the touch, for example when the user puts the device to their face or when more than five touches happen simultaneously. This is the final phase of a touch. |
Following is an example script which will shoot a ray whenever the user taps on the screen:
var particle : GameObject;

function Update () {
    for (var touch : Touch in Input.touches) {
        if (touch.phase == TouchPhase.Began) {
            // Construct a ray from the current touch coordinates
            var ray = Camera.main.ScreenPointToRay (touch.position);
            if (Physics.Raycast (ray)) {
                // Create a particle if hit
                Instantiate (particle, transform.position, transform.rotation);
            }
        }
    }
}
Mouse Simulation
On top of native touch support, Unity iOS/Android provides mouse simulation. You can use mouse functionality from the standard Input class.
Device Orientation
Unity iOS/Android allows you to obtain a discrete description of the device's physical orientation in three-dimensional space. Detecting a change in orientation can be useful if you want to create game behaviors that depend on how the user is holding the device.
You can retrieve device orientation by accessing the Input.deviceOrientation property. Orientation can be one of the following:
| Unknown | The orientation of the device cannot be determined. For example, when the device is rotated diagonally. |
| Portrait | The device is in portrait mode, with the device held upright and the home button at the bottom. |
| PortraitUpsideDown | The device is in portrait mode but upside down, with the device held upright and the home button at the top. |
| LandscapeLeft | The device is in landscape mode, with the device held upright and the home button on the right side. |
| LandscapeRight | The device is in landscape mode, with the device held upright and the home button on the left side. |
| FaceUp | The device is held parallel to the ground with the screen facing upwards. |
| FaceDown | The device is held parallel to the ground with the screen facing downwards. |
Accelerometer
As the mobile device moves, a built-in accelerometer reports linear acceleration changes along the three primary axes in three-dimensional space. Acceleration along each axis is reported directly by the hardware as G-force values. A value of 1.0 represents a load of about +1g along a given axis while a value of -1.0 represents -1g. If you hold the device upright (with the home button at the bottom) in front of you, the X axis is positive along the right, the Y axis is positive directly up, and the Z axis is positive pointing toward you.
You can retrieve the accelerometer value by accessing the Input.acceleration property.
The following is an example script which will move an object using the accelerometer:
var speed = 10.0;

function Update () {
    var dir : Vector3 = Vector3.zero;

    // we assume that the device is held parallel to the ground
    // and the Home button is in the right hand

    // remap the device acceleration axis to game coordinates:
    //  1) XY plane of the device is mapped onto XZ plane
    //  2) rotated 90 degrees around Y axis
    dir.x = -Input.acceleration.y;
    dir.z = Input.acceleration.x;

    // clamp acceleration vector to the unit sphere
    if (dir.sqrMagnitude > 1)
        dir.Normalize();

    // Make it move 10 meters per second instead of 10 meters per frame...
    dir *= Time.deltaTime;

    // Move object
    transform.Translate (dir * speed);
}
Low-Pass Filter
Accelerometer readings can be jerky and noisy. Applying low-pass filtering on the signal allows you to smooth it and get rid of high frequency noise.
The following script shows you how to apply low-pass filtering to accelerometer readings:
var AccelerometerUpdateInterval : float = 1.0 / 60.0;
var LowPassKernelWidthInSeconds : float = 1.0; // tweakable

private var LowPassFilterFactor : float = AccelerometerUpdateInterval / LowPassKernelWidthInSeconds;
private var lowPassValue : Vector3 = Vector3.zero;

function Start () {
    lowPassValue = Input.acceleration;
}

function LowPassFilterAccelerometer() : Vector3 {
    lowPassValue = Vector3.Lerp(lowPassValue, Input.acceleration, LowPassFilterFactor);
    return lowPassValue;
}
The greater the value of LowPassKernelWidthInSeconds, the slower the filtered value will converge towards the current input sample (and vice versa). You can use the LowPassFilterAccelerometer() function in place of reading Input.acceleration directly.
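To see the effect of the kernel width, the same lerp-based filter can be run on a synthetic signal. This is a plain-Python sketch mirroring the script above with floats instead of Vector3 values; the function names are invented for the example:

```python
def lerp(a, b, t):
    return a + (b - a) * t

def smooth(samples, update_interval=1.0 / 60.0, kernel_width_s=1.0):
    """Exponential (lerp-based) low-pass filter, mirroring the
    accelerometer script above with scalar samples."""
    factor = update_interval / kernel_width_s
    value = samples[0]
    out = []
    for s in samples:
        value = lerp(value, s, factor)
        out.append(value)
    return out

# Feed the filter a sudden step from 0 to 1 and compare kernel widths:
step = [0.0] + [1.0] * 60
print(smooth(step, kernel_width_s=1.0)[-1])   # converges slowly
print(smooth(step, kernel_width_s=0.1)[-1])   # converges much faster
```

With these numbers, the 1-second kernel reaches only about 64% of the step after 60 frames, while the 0.1-second kernel has essentially converged -- the trade-off being that the wider kernel suppresses high-frequency noise more strongly.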
I'd like as much precision as possible when reading the accelerometer. What should I do?
Reading the Input.acceleration variable does not equal sampling the hardware. Put simply, Unity samples the hardware at a frequency of 60Hz and stores the result in the variable. In reality, things are a little more complicated -- accelerometer sampling doesn't occur at consistent time intervals if the system is under significant CPU load. As a result, the system might report 2 samples during one frame, then 1 sample during the next frame.
You can access all accelerometer measurements taken during the frame. The following code illustrates a simple time-weighted average of all the accelerometer events collected within the last frame:
function AverageAcceleration() : Vector3 {
    var period : float = 0.0;
    var acc : Vector3 = Vector3.zero;
    for (var evnt : iPhoneAccelerationEvent in iPhoneInput.accelerationEvents) {
        acc += evnt.acceleration * evnt.deltaTime;
        period += evnt.deltaTime;
    }
    if (period > 0)
        acc *= 1.0 / period;
    return acc;
}
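The weighting in the loop above can be checked on concrete numbers. This is an illustrative Python sketch (not Unity code; scalars stand in for Vector3 readings and the function name is invented):

```python
def weighted_average(events):
    """Time-weighted average of one frame's accelerometer samples:
    each sample is weighted by the interval it covers."""
    period = sum(dt for _, dt in events)
    if period == 0:
        return 0.0
    return sum(a * dt for a, dt in events) / period

# Two samples in one frame: 0.2 g held for 10 ms, then 0.8 g for 5 ms.
print(weighted_average([(0.2, 0.010), (0.8, 0.005)]))
```

The longer 0.2 g interval pulls the result to 0.4 rather than the naive midpoint of 0.5, which is exactly why the samples are weighted by deltaTime.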
Further Reading
The Unity mobile input API is originally based on Apple's API. It may help to learn more about the native API to better understand Unity's Input API. You can find the Apple input API documentation here:
- Programming Guide: Event Handling (Apple iPhone SDK documentation)
- UITouch Class Reference (Apple iOS SDK documentation)
Note: The above links reference your locally installed iPhone SDK Reference Documentation and will contain native ObjectiveC code. It is not necessary to understand these documents for using Unity on mobile devices, but may be helpful to some!

iOS
Device geographical location
Device geographical location can be obtained via the iPhoneInput.lastLocation property. Before calling this property you should start location service updates using iPhoneSettings.StartLocationServiceUpdates() and check the service status via iPhoneSettings.locationServiceStatus. See the scripting reference for details.
iOS-Keyboard
In most cases, Unity will handle keyboard input automatically for GUI elements but it is also easy to show the keyboard on demand from a script.

iOS
Using the Keyboard
GUI Elements
The keyboard will appear automatically when a user taps on editable GUI elements. Currently, GUI.TextField, GUI.TextArea and GUI.PasswordField will display the keyboard; see the GUI class documentation for further details.
Manual Keyboard Handling
Use the iPhoneKeyboard.Open function to open the keyboard. Please see the iPhoneKeyboard scripting reference for the parameters that this function takes.
Keyboard Type Summary
The Keyboard supports the following types:
| iPhoneKeyboardType.Default | Letters. Can be switched to keyboard with numbers and punctuation. |
| iPhoneKeyboardType.ASCIICapable | Letters. Can be switched to keyboard with numbers and punctuation. |
| iPhoneKeyboardType.NumbersAndPunctuation | Numbers and punctuation. Can be switched to keyboard with letters. |
| iPhoneKeyboardType.URL | Letters with slash and .com buttons. Can be switched to keyboard with numbers and punctuation. |
| iPhoneKeyboardType.NumberPad | Only numbers from 0 to 9. |
| iPhoneKeyboardType.PhonePad | Keyboard used to enter phone numbers. |
| iPhoneKeyboardType.NamePhonePad | Letters. Can be switched to phone keyboard. |
| iPhoneKeyboardType.EmailAddress | Letters with @ sign. Can be switched to keyboard with numbers and punctuation. |
Text Preview
By default, an edit box will be created and placed on top of the keyboard after it appears. This works as a preview of the text that the user is typing, so the text is always visible to the user. However, you can disable text preview by setting iPhoneKeyboard.hideInput to true. Note that this works only for certain keyboard types and input modes. For example, it will not work for phone keypads and multi-line text input. In such cases, the edit box will always appear. iPhoneKeyboard.hideInput is a global variable and will affect all keyboards.
Keyboard Orientation
By default, the keyboard automatically follows the device orientation. To disable or enable rotation to a certain orientation, use the following properties available in iPhoneKeyboard:
| autorotateToPortrait | Enable or disable autorotation to portrait orientation (button at the bottom). |
| autorotateToPortraitUpsideDown | Enable or disable autorotation to portrait orientation (button at top). |
| autorotateToLandscapeLeft | Enable or disable autorotation to landscape left orientation (button on the right). |
| autorotateToLandscapeRight | Enable or disable autorotation to landscape right orientation (button on the left). |
Visibility and Keyboard Size
There are three keyboard properties in iPhoneKeyboard that determine keyboard visibility status and size on the screen.
| visible | Returns true if the keyboard is fully visible on the screen and can be used to enter characters. |
| area | Returns the position and dimensions of the keyboard. |
| active | Returns true if the keyboard is activated. This is not a static property; you must have a keyboard instance to use it. |
Note that iPhoneKeyboard.area will return a rect with position and size set to 0 until the keyboard is fully visible on the screen. You should not query this value immediately after iPhoneKeyboard.Open. The sequence of keyboard events is as follows:
- iPhoneKeyboard.Open is called. iPhoneKeyboard.active returns true. iPhoneKeyboard.visible returns false. iPhoneKeyboard.area returns (0, 0, 0, 0).
- Keyboard slides out into the screen. All properties remain the same.
- Keyboard stops sliding. iPhoneKeyboard.active returns true. iPhoneKeyboard.visible returns true. iPhoneKeyboard.area returns real position and size of the keyboard.
Secure Text Input
It is possible to configure the keyboard to hide symbols when typing. This is useful when users are required to enter sensitive information (such as passwords). To manually open keyboard with secure text input enabled, use the following code:
iPhoneKeyboard.Open("", iPhoneKeyboardType.Default, false, false, true);

Hiding text while typing
Alert keyboard
To display the keyboard with a black semi-transparent background instead of the classic opaque one, call iPhoneKeyboard.Open as follows:
iPhoneKeyboard.Open("", iPhoneKeyboardType.Default, false, false, true, true);

Classic keyboard

Alert keyboard

Android
Unity Android reuses the iOS API to display system keyboard. Even though Unity Android supports most of the functionality of its iPhone counterpart, there are two aspects which are not supported:
- iPhoneKeyboard.hideInput
- iPhoneKeyboard.area
Please also note that the layout of an iPhoneKeyboardType can differ somewhat between devices.
iOS-Advanced

iOS
Advanced iOS scripting
Determining Device Generation
Different device generations support different functionality and have widely varying performance. You should query the device's generation and decide which functionality should be disabled to compensate for slower devices.
You can find the device generation from the iPhone.generation property. The reported generation can be one of the following:
- iPhone
- iPhone3G
- iPhone3GS
- iPhone4
- iPodTouch1Gen
- iPodTouch2Gen
- iPodTouch3Gen
- iPodTouch4Gen
- iPad1Gen
You can find more information about different device generations, performance and supported functionality in our iPhone Hardware Guide.
Device Properties
There are a number of device-specific properties that you can access:-
| SystemInfo.deviceUniqueIdentifier | Unique device identifier. |
| SystemInfo.deviceName | User specified name for device. |
| SystemInfo.deviceModel | Is it iPhone or iPod Touch? |
| SystemInfo.operatingSystem | Operating system name and version. |
Anti-Piracy Check
Pirates will often hack an application from the AppStore (by removing Apple DRM protection) and then redistribute it for free. Unity iOS comes with an anti-piracy check which allows you to determine if your application was altered after it was submitted to the AppStore.
You can check whether your application is genuine (not hacked) with the Application.genuine property. If this property returns false then you might notify the user that they are using a hacked application, or perhaps disable access to some functions of your application.
Note: accessing the Application.genuine property is a fairly expensive operation and so you shouldn't do it during frame updates or other time-critical code.
Vibration Support
You can trigger a vibration by calling Handheld.Vibrate. Note that iPod Touch devices lack vibration hardware and will just ignore this call.

Android
Advanced Android scripting
Determining Device Generation
Different Android devices support different functionality and have widely varying performance. You should target specific devices or device families and decide which functionality should be disabled to compensate for slower devices. There are a number of device-specific properties that you can access to determine which device is being used.
Note: Android Marketplace does some additional compatibility filtering, so you should not be concerned if an ARMv7-only app optimised for OGLES2 is offered to some old slow devices.
Device Properties
| SystemInfo.deviceUniqueIdentifier | Unique device identifier. |
| SystemInfo.deviceName | User specified name for device. |
| SystemInfo.deviceModel | The model of the device. |
| SystemInfo.operatingSystem | Operating system name and version. |
Anti-Piracy Check
Pirates will often hack an application (by removing its DRM protection) and then redistribute it for free. Unity Android comes with an anti-piracy check which allows you to determine if your application was altered after it was submitted to the store.
You can check whether your application is genuine (not hacked) with the Application.genuine property. If this property returns false then you might notify the user that they are using a hacked application, or perhaps disable access to some functions of your application.
Note: Application.genuineCheckAvailable should be used along with Application.genuine to verify that application integrity can actually be confirmed. Accessing the Application.genuine property is a fairly expensive operation and so you shouldn't do it during frame updates or other time-critical code.
Vibration Support
You can trigger a vibration by calling Handheld.Vibrate. However, devices lacking vibration hardware will just ignore this call.
iOS-DotNet

iOS
Unity iOS supports two .NET API compatibility levels: .NET 2.0 and a subset of .NET 2.0. You can select the appropriate level in the Player Settings.
.NET API 2.0
Unity supports the .NET 2.0 API profile. This is close to the full .NET 2.0 API and offers the best compatibility with pre-existing .NET code. However, the application's build size and startup time will be relatively poor.
Note: Unity iOS does not support namespaces in scripts. If you have a third party library supplied as source code then the best approach is to compile it to a DLL outside Unity and then drop the DLL file into your project's Assets folder.
.NET 2.0 Subset
Unity also supports the .NET 2.0 Subset API profile. This is close to the Mono "monotouch" profile, so many limitations of the "monotouch" profile also apply to Unity's .NET 2.0 Subset profile. More information on the limitations of the "monotouch" profile can be found here. The advantage of using this profile is reduced build size (and startup time) but this comes at the expense of compatibility with existing .NET code.

Android
Unity Android supports two .NET API compatibility levels: .NET 2.0 and a subset of .NET 2.0. You can select the appropriate level in the Player Settings.
.NET API 2.0
Unity supports the .NET 2.0 API profile. This is close to the full .NET 2.0 API and offers the best compatibility with pre-existing .NET code. However, the application's build size and startup time will be relatively poor.
Note: Unity Android does not support namespaces in scripts. If you have a third party library supplied as source code then the best approach is to compile it to a DLL outside Unity and then drop the DLL file into your project's Assets folder.
.NET 2.0 Subset
Unity also supports the .NET 2.0 Subset API profile. This is close to the Mono "monotouch" profile, so many limitations of the "monotouch" profile also apply to Unity's .NET 2.0 Subset profile. More information on the limitations of the "monotouch" profile can be found here. The advantage of using this profile is reduced build size (and startup time) but this comes at the expense of compatibility with existing .NET code.
iphone-Hardware
Hardware models
The following table summarizes iOS hardware available in devices of various generations:
iPhone Models
- Original iPhone, iPhone 3G: Fixed-function graphics (no fancy shaders), very slow CPU and GPU.
- iPhone 3GS: Shader-capable hardware; per-pixel lighting (bumpmaps) can only be used on small portions of the screen at once. Requires scripting optimization for complex games. This is the average hardware of the app market as of July 2012.
- iPhone 4, iPhone 4S: The iPhone 4S, with the new A5 chip, is capable of rendering complex shaders throughout the entire screen. Even image effects may be possible. However, optimizing your shaders is still crucial. But if your game isn't trying to push the limits of the device, optimizing scripting and gameplay is probably as much of a waste of time on this generation of devices as it is on PC.
iPod Touch Models
- iPod Touch 1st and 2nd generation: Fixed-function graphics (no fancy shaders), very slow CPU and GPU.
- iPod Touch 3rd and 4th generation: Shader-capable hardware; per-pixel lighting (bumpmaps) can only be used on small portions of the screen at once. Requires scripting optimization for complex games. This is the average hardware of the app market as of July 2012.
iPad Models
- iPad: Similar to the iPod Touch 4th generation and iPhone 4.
- iPad 2: The A5 can do full-screen bumpmapping, assuming the shader is simple enough. However, it is likely that your game will perform best with bumpmapping only on crucial objects. Full-screen image effects are still out of reach. Scripting optimization is less important.
- iPad 3: The iPad 3 has been shown to be capable of render-to-texture effects such as reflective water and fullscreen image effects. However, optimized shaders are still crucial. But if your game isn't trying to push the limits of the device, optimizing scripting and gameplay is probably as much of a waste of time on this generation of devices as it is on PC.
Graphics Processing Unit and Hidden Surface Removal
The iPhone/iPad graphics processing unit (GPU) is a Tile-Based Deferred Renderer. In contrast with most GPUs in desktop computers, the iPhone/iPad GPU focuses on minimizing the work required to render an image as early as possible in the processing of a scene. That way, only the visible pixels will consume processing resources.
The GPU's frame buffer is divided up into tiles and rendering happens tile by tile. First, triangles for the whole frame are gathered and assigned to the tiles. Then, visible fragments of each triangle are chosen. Finally, the selected triangle fragments are passed to the rasterizer (triangle fragments occluded from the camera are rejected at this stage).
In other words, the iPhone/iPad GPU implements a Hidden Surface Removal operation at reduced cost. Such an architecture consumes less memory bandwidth, has lower power consumption and utilizes the texture cache better. Tile-Based Deferred Rendering allows the device to reject occluded fragments before actual rasterization, which helps to keep overdraw low.
For more information see also:-
- POWERVR MBX Technology Overview
- Apple Notes on iPhone/iPad GPU and OpenGL ES
- Apple Performance Advices for OpenGL ES in General
- Apple Performance Advices for OpenGL ES Shaders
MBX series
Older devices such as the original iPhone, iPhone 3G and iPod Touch 1st and 2nd Generation are equipped with the MBX series of GPUs. The MBX series supports only OpenGL ES1.1, the fixed function Transform/Lighting pipeline and two textures per fragment.
SGX series
Starting with the iPhone 3GS, newer devices are equipped with the SGX series of GPUs. The SGX series features support for the OpenGL ES2.0 rendering API and vertex and pixel shaders. The Fixed-function pipeline is not supported natively on such GPUs, but instead is emulated by generating vertex and pixel shaders with analogous functionality on the fly.
The SGX series fully supports MultiSample anti-aliasing.
Texture Compression
The only texture compression format supported by iOS is PVRTC. PVRTC provides support for RGB and RGBA (color information plus an alpha channel) texture formats and can compress a single pixel to two or four bits.
The PVRTC format is essential to reduce the memory footprint and to reduce consumption of memory bandwidth (ie, the rate at which data can be read from memory, which is usually very limited on mobile devices).
Vertex Processing Unit
The iPhone/iPad has a dedicated unit responsible for vertex processing which runs calculations in parallel with rasterization. In order to achieve better parallelization, the iPhone/iPad processes vertices one frame ahead of the rasterizer.
Unified Memory Architecture
Both the CPU and GPU on the iPhone/iPad share the same memory. The advantage is that you don't need to worry about running out of video memory for your textures (unless, of course, you run out of main memory too). The disadvantage is that you share the same memory bandwidth for gameplay and graphics. The more memory bandwidth you dedicate to graphics, the less you will have for gameplay and physics.
Multimedia CoProcessing Unit
The iPhone/iPad main CPU is equipped with a powerful SIMD (Single Instruction, Multiple Data) coprocessor supporting either the VFP or the NEON architecture. The Unity iOS run-time takes advantage of these units for multiple tasks such as calculating skinned mesh transformations, geometry batching, audio processing and other calculation-intensive operations.
Page last updated: 2012-08-20

iphone-performance
This section describes optimizations specific to iOS devices. For more detailed information about mobile devices, see the Practical Guide to Optimization for Mobiles.
- iOS Specific Optimizations
- Measuring Performance with the Built-in Profiler
- Optimizing the Size of the Built iOS Player
iphone-iOS-Optimization
This page details optimizations which are unique to iOS deployment. For more information on optimizing for mobile devices, see the Practical Guide to Optimization for Mobiles.
Script Call Optimization
Most of the functions in the UnityEngine namespace are implemented in C/C++. Calling a C/C++ function from a Mono script involves a performance overhead. You can use iOS Script Call optimization (menu: ) to save about 1 to 4 milliseconds per frame. The options for this setting are:-
- Slow and Safe - the default Mono internal call handling with exception support.
- Fast and Exceptions Unsupported - a faster implementation of Mono internal call handling. However, this doesn't support exceptions and so should be used with caution. An app that doesn't explicitly handle exceptions (and doesn't need to deal with them gracefully) is an ideal candidate for this option.
Setting the Desired Framerate
Unity iOS allows you to change the frequency with which your application will try to execute its rendering loop, which is set to 30 frames per second by default. You can lower this number to save battery power but of course this saving will come at the expense of frame updates. Conversely, you can increase the framerate to give the rendering priority over other activities such as touch input and accelerometer processing. You will need to experiment with your choice of framerate to determine how it affects gameplay in your case.
If your application involves heavy computation or rendering and can maintain only 15 frames per second, say, then setting the desired frame rate higher than fifteen wouldn't give any extra performance. The application has to be optimized sufficiently to allow for a higher framerate.
To set the desired framerate, open the XCode project generated by Unity and open the AppController.mm file. The line
#define kFPS 30
...determines the current framerate, so you can simply change it to the desired value. For example, if you change the define to:-
#define kFPS 60
...then the application will attempt to render at 60 FPS instead of 30 FPS.
The Rendering Loop
When iOS version 3.1 or later is in use, Unity will use the CADisplayLink class to schedule the rendering loop. Versions before 3.1 need to use one of several fallback methods to handle the loop. However, the fallback methods can be activated even for iOS 3.1 and later by changing the line
#define USE_DISPLAY_LINK_IF_AVAILABLE 1
...and changing it to
#define USE_DISPLAY_LINK_IF_AVAILABLE 0
Fallback Loop Types
Apple recommends the system timer for scheduling the rendering operation on iOS versions before 3.1. This approach is good for applications where performance is not critical and favours battery life and correct processing of events over rendering performance. However, better rendering performance is often more important to games, so Unity provides several scheduling methods to tweak the performance of the rendering loop:-
- System Timer: this is the standard approach suggested by Apple. It uses the NSTimer class to schedule rendering and has the worst rendering performance but guarantees to process all input events.
- Thread: a separate thread is used to schedule rendering. This offers better rendering performance than the NSTimer approach, but sometimes could miss touch or accelerometer events. This method of scheduling is also the easiest to set up and is the default method used by Unity for iOS versions before 3.1.
- Event Pump: this uses a CFRunLoop object to dispatch events. It gives better rendering performance than the NSTimer approach and also allows you to set the amount of time the OS should spend processing touch and accelerometer events. This option must be used with care since touch and accelerometer events will be lost if there is not enough processor time available to handle them.
The different fallback loop types can be selected by changing defines in the AppController.mm file. The significant lines are the following:-
#define FALLBACK_LOOP_TYPE NSTIMER_BASED_LOOP
#define FALLBACK_LOOP_TYPE THREAD_BASED_LOOP
#define FALLBACK_LOOP_TYPE EVENT_PUMP_BASED_LOOP
The file should have all but one of these lines commented out. The uncommented line selects the rendering loop method that will be used by the application.
If you want to prioritize rendering over input processing with the NSTimer approach you should locate and change the line
#define kThrottleFPS 2.0
...in AppController.mm. Increasing this number will give higher priority to rendering. The result of changing this value varies among applications, so it is best to try it for yourself and see what happens in your specific case.
If you use the Event Pump rendering loop then you need to tweak the kMillisecondsPerFrameToProcessEvents constant to achieve the desired responsiveness. This constant specifies exactly how much time (in milliseconds) the OS is allowed to spend processing events. If you allocate insufficient time for this task then touch or accelerometer events might be lost; the application will run faster but be less responsive.
To specify the amount of time (in milliseconds) that the OS will spend processing events, locate and change the line
#define kMillisecondsPerFrameToProcessEvents 7.0
...in AppController.mm.
Tuning Accelerometer Processing Frequency
If accelerometer input is processed too frequently then the overall performance of your game may suffer as a result. By default, a Unity iOS application will sample the accelerometer 60 times per second. You may see some performance benefit by reducing the accelerometer sampling frequency and it can even be set to zero for games that don't use accelerometer input. You can change the accelerometer frequency from the Other Settings panel in the iOS Player Settings.
Page last updated: 2012-07-30

iphone-InternalProfiler

iOS
On iOS, the internal profiler is disabled by default. To enable it, open the Unity-generated XCode project, select the iPhone_Profiler.h file and change the line
#define ENABLE_INTERNAL_PROFILER 0
to
#define ENABLE_INTERNAL_PROFILER 1
Display the output console (GDB) from the XCode menu and then run your project. Unity will output statistics to the console window every thirty frames.

Android
On Android, the internal profiler is enabled by default. Just make sure Development Build is checked in the player settings when building, and the statistics should show up in logcat when run on the device. To view logcat, you need adb (the Android Debug Bridge). Once you have that, simply run the shell command adb logcat.
Here's an example of the built-in profiler's output.
iPhone/iPad Unity internal profiler stats:
cpu-player>    min:  9.8   max: 24.0   avg: 16.3
cpu-ogles-drv> min:  1.8   max:  8.2   avg:  4.3
cpu-waits-gpu> min:  0.8   max:  1.2   avg:  0.9
cpu-present>   min:  1.2   max:  3.9   avg:  1.6
frametime>     min: 31.9   max: 37.8   avg: 34.1
draw-call #>   min:  4     max:  9     avg:  6     | batched:   10
tris #>        min: 3590   max: 4561   avg: 3871   | batched: 3572
verts #>       min: 1940   max: 2487   avg: 2104   | batched: 1900
player-detail> physx: 1.2 animation: 1.2 culling: 0.5 skinning: 0.0 batching: 0.2 render: 12.0 fixed-update-count: 1 .. 2
mono-scripts>  update: 0.5 fixedUpdate: 0.0 coroutines: 0.0
mono-memory>   used heap: 233472 allocated heap: 548864 max number of collections: 1 collection total duration: 5.7
All times are measured in milliseconds per frame. You can see the minimum, maximum and average times over the last thirty frames.
General CPU Activity
| cpu-player | Displays the time your game spends executing code inside the Unity engine and executing scripts on the CPU. |
| cpu-ogles-drv | Displays the time spent executing OpenGL ES driver code on the CPU. Many factors like the number of draw calls, number of internal rendering state changes, the rendering pipeline setup and even the number of processed vertices can have an effect on the driver stats. |
| cpu-waits-gpu | Displays the time the CPU is idle while waiting for the GPU to finish rendering. If this number exceeds 2-3 milliseconds then your application is most probably fillrate/GPU processing bound. If this value is too small then the profile skips displaying the value. |
| msaa-resolve | The time taken to apply anti-aliasing. |
| cpu-present | The amount of time spent executing the presentRenderbuffer command in OpenGL ES. |
| frametime | Represents the overall time of a game frame. Note that iOS hardware is always locked at a 60Hz refresh rate, so frame times will always be a multiple of ~16.7ms (1000ms/60Hz = ~16.7ms). |
Rendering Statistics
| draw-call # | The number of draw calls per frame. Keep it as low as possible. |
| tris # | Total number of triangles sent for rendering. |
| verts # | Total number of vertices sent for rendering. You should keep this number below 10000 if you use only static geometry but if you have lots of skinned geometry then you should keep it much lower. |
| batched | Number of draw-calls, triangles and vertices which were automatically batched by the engine. Comparing these numbers with the draw-call and triangle totals will give you an idea of how well your scene is prepared for batching. Share as many materials as possible among your objects to improve batching. |
Detailed Unity Player Statistics
The player-detail section provides a detailed breakdown of what is happening inside the engine:-
| physx | Time spent on physics. |
| animation | Time spent animating bones. |
| culling | Time spent culling objects outside the camera frustum. |
| skinning | Time spent applying animations to skinned meshes. |
| batching | Time spent batching geometry. Batching dynamic geometry is considerably more expensive than batching static geometry. |
| render | Time spent rendering visible objects. |
| fixed-update-count | Minimum and maximum number of FixedUpdates executed during this frame. Too many FixedUpdates will deteriorate performance considerably. There are some simple guidelines to set a good value for the fixed time delta here. |
Detailed Scripts Statistics
The mono-scripts section provides a detailed breakdown of the time spent executing code in the Mono runtime:
| update | Total time spent executing all Update() functions in scripts. |
| fixedUpdate | Total time spent executing all FixedUpdate() functions in scripts. |
| coroutines | Time spent inside script coroutines. |
Detailed Statistics on Memory Allocated by Scripts
The mono-memory section gives you an idea of how memory is being managed by the Mono garbage collector:
| allocated heap | Total amount of memory available for allocations. A garbage collection will be triggered if there is not enough memory left in the heap for a given allocation. If there is still not enough free memory even after the collection then the allocated heap will grow in size. |
| used heap | The portion of the allocated heap which is currently used up by objects. Every time you create a new class instance (not a struct) this number will grow until the next garbage collection. |
| max number of collections | Number of garbage collection passes during the last 30 frames. |
| collection total duration | Total time (in milliseconds) of all garbage collection passes that have happened during the last 30 frames. |
iphone-playerSizeOptimization
The two main ways of reducing the size of the player are by changing the Active Build Configuration within Xcode and by changing the Stripping Level within Unity.
Building in Release Mode
You can choose between the Debug and Release options on the Active Build Configuration drop-down menu in Xcode. Building as Release instead of Debug can reduce the size of the built player by as much as 2-3MB, depending on the game.

The Active Build Configuration drop-down
In Release mode, the player will be built without any debug information, so if your game crashes or has other problems there will be no stack trace information available for output. This is fine for deploying a finished game but you will probably want to use Debug mode during development.
iOS Stripping Level (Advanced License feature)
The size optimizations activated by stripping work in the following way:-
- Strip assemblies level: the scripts' bytecode is analyzed so that classes and methods that are not referenced from the scripts can be removed from the DLLs and thereby excluded from the AOT compilation phase. This optimization reduces the size of the main binary and accompanying DLLs and is safe as long as no reflection is used.
- Strip ByteCode level: any .NET DLLs (stored in the Data folder) are stripped down to metadata only. This is possible because all the code is already precompiled during the AOT phase and linked into the main binary.
- Use micro mscorlib level: a special, smaller version of mscorlib is used. Some components are removed from this library, for example, Security, Reflection.Emit, Remoting, non Gregorian calendars, etc. Also, interdependencies between internal components are minimized. This optimization reduces the main binary and mscorlib.dll size but it is not compatible with some System and System.Xml assembly classes, so use it with care.
These levels are cumulative, so level 3 optimization implicitly includes levels 2 and 1, while level 2 optimization includes level 1.
Note: Micro mscorlib is a heavily stripped-down version of the core library. Only those items that are required by the Mono runtime in Unity remain. Best practice for using micro mscorlib is not to use any classes or other features of .NET that are not required by your application. GUIDs are a good example of something you could omit; they can easily be replaced with custom made pseudo GUIDs and doing this would result in better performance and app size.
Tips
How to Deal with Stripping when Using Reflection
Stripping depends highly on static code analysis and sometimes this can't be done effectively, especially when dynamic features like reflection are used. In such cases, it is necessary to give some hints as to which classes shouldn't be touched. Unity supports a per-project custom stripping blacklist. Using the blacklist is a simple matter of creating a link.xml file and placing it into the Assets folder. An example of the contents of the link.xml file follows. Classes marked for preservation will not be affected by stripping:-
<linker>
<assembly fullname="System.Web.Services">
<type fullname="System.Web.Services.Protocols.SoapTypeStubInfo" preserve="all"/>
<type fullname="System.Web.Services.Configuration.WebServicesConfigurationSectionHandler" preserve="all"/>
</assembly>
<assembly fullname="System">
<type fullname="System.Net.Configuration.WebRequestModuleHandler" preserve="all"/>
<type fullname="System.Net.HttpRequestCreator" preserve="all"/>
<type fullname="System.Net.FileWebRequestCreator" preserve="all"/>
</assembly>
</linker>
Note: it can sometimes be difficult to determine which classes are getting stripped in error even though the application requires them. You can often get useful information about this by running the stripped application on the simulator and checking the Xcode console for error messages.
Simple Checklist for Making Your Distribution as Small as Possible
- Minimize your assets: enable PVRTC compression for textures and reduce their resolution as far as possible. Also, minimize the number of uncompressed sounds. There are some additional tips for file size reduction here.
- Set the iOS Stripping Level to Use micro mscorlib.
- Set the script call optimization level to Fast but no exceptions.
- Don't use anything that lives in System.dll or System.Xml.dll in your code. These libraries are not compatible with micro mscorlib.
- Remove unnecessary code dependencies.
- Set the API Compatibility Level to .Net 2.0 subset. Note that .Net 2.0 subset has limited compatibility with other libraries.
- Set the Target Platform to armv6 (OpenGL ES1.1).
- Don't use JS Arrays.
- Avoid generic containers in combination with value types, including structs.
Can I produce apps of less than 20 megabytes with Unity?
Yes. An empty project would take about 13 MB in the AppStore if all the size optimizations were turned off. This gives you a budget of about 7MB for compressed assets in your game. If you own an Advanced License (and therefore have access to the stripping option), the empty scene with just the main camera can be reduced to about 6 MB in the AppStore (zipped and DRM attached) and you will have about 14 MB available for compressed assets.
Why did my app increase in size after being released to the AppStore?
When your app is published, Apple first encrypts the binary file and then compresses it via zip. Most often Apple's DRM increases the binary size by about 4 MB or so. As a general rule, you should expect the final size to be approximately equal to the size of the zip-compressed archive of all files (except the executable) plus the size of the uncompressed executable file.
Page last updated: 2011-11-08

iphone-accountsetup
There are some steps you must follow before you can build and run any code (including Unity-built games) on your iOS device. These steps are a prerequisite to publishing your own iOS games.
1. Apply to Apple to Become a Registered iPhone/iPad Developer
You do this through Apple's website: http://developer.apple.com/iphone/program/
2. Upgrade your Operating System and iTunes Installation
Please note that these are Apple's requirements as part of using the iPhone SDK, but the requirements can change from time to time.
3. Download the iPhone SDK
Download the latest iOS SDK from the iOS dev center and install it. Do not download the beta version of the SDK - you should use only the latest shipping version. Note that downloading and installing the iPhone SDK will also install XCode.
4. Get Your Device Identifier
Connect your iOS device to the Mac with the USB cable and launch XCode. XCode will detect your phone as a new device and you should register it with the "Use For Development" button. This will usually open the Organizer window but if it doesn't then go to Window->Organizer. You should see your iOS device in the devices list on the left; select it and note your device's identifier code (which is about 40 characters long).
5. Add Your Device
Log in to the iPhone developer center and enter the program portal (button on the right). Go to the Devices page via the link on left side and then click the Add Device button on the right. Enter a name for your device (alphanumeric characters only) and your device's identifier code (noted in step 4 above). Click the Submit button when done.
6. Create a Certificate
From the iPhone Developer Program Portal, click the Certificates link on the left side and follow the instructions listed under How-To...
7. Download and Install the WWDR Intermediate Certificate
The download link for the WWDR Intermediate Certificate is in the same "Certificates" section (just above the "Important Notice" rubric). Once downloaded, double-click the certificate file to install it.
8. Create a Provisioning File
Provisioning profiles are a bit complex, and need to be set up according to the way you have organized your team. It is difficult to give general instructions for provisioning, so we recommend that you look at the Provisioning How-to section on the Apple Developer website.
Page last updated: 2011-11-08

iphone-unsupported
Graphics
- DXT texture compression is not supported; use the PVRTC format instead. See the Texture2D Component page for details.
- Non-square textures cannot be compressed to the PVRTC format.
- Movie textures are not supported; use full-screen streaming playback instead. See the Movie playback page for details.
- OpenGL ES2.0 is not supported on iPhone, iPhone 3G, iPod Touch 1st and iPod Touch 2nd Generation hardware.
Audio
- Ogg audio compression is not supported. Ogg audio is automatically converted to MP3 when you switch to the iOS platform in the Editor. See the AudioClip Component page for more information about audio support in Unity iOS.
Scripting
- The OnMouseEnter, OnMouseOver, OnMouseExit, OnMouseDown, OnMouseUp and OnMouseDrag events are not supported.
- Dynamic features such as duck typing are not supported. Use #pragma strict in your scripts to have the compiler report dynamic features as errors.
- Video streaming via the WWW class is not supported.
- FTP support by the WWW class is limited.
Features Restricted to the Unity iOS Advanced License
- Static batching is only supported in Unity iOS Advanced.
- Video playback is only supported in Unity iOS Advanced.
- Splash-screen customization is only supported in Unity iOS Advanced.
- AssetBundles are only supported in Unity iOS Advanced.
- Code stripping is only supported in Unity iOS Advanced.
- .NET sockets are only supported in Unity iOS Advanced.
Note: 1MB of .NET CIL code translates to roughly 3-4MB of ARM code, so it is recommended to minimize references to external libraries. For example, if your application references System.dll and System.Xml.dll, it will need about 6MB of additional ARM code if stripping is not used. At some point the application may hit a limit where the linker has problems linking the code. If the size of your application matters, C# may be a better choice than JavaScript for your code because of its lower dependencies.
Page last updated: 2012-11-13

iphone-Plugins
This page describes Native Code Plugins for the iOS platform.
Building an Application with a Native Plugin for iOS
- Define your extern method in the C# file as follows:
  [DllImport ("__Internal")]
  private static extern float FooPluginFunction ();
- Set the editor to the iOS build target.
- Add your native code source files to the generated XCode project's "Classes" folder (this folder is not overwritten when the project is updated, but don't forget to backup your native code).
If you are using C++ (.cpp) or Objective-C (.mm) to implement the plugin you must ensure the functions are declared with C linkage to avoid name mangling issues.
extern "C" {
float FooPluginFunction ();
}
Using Your Plugin from C#
iOS native plugins can be called only when deployed on the actual device, so it is recommended to wrap all native code methods with an additional C# code layer. This code should check Application.platform and call native methods only when the app is running on the device; dummy values can be returned when the app runs in the Editor. See the Bonjour browser sample application for an example.
Calling C# / JavaScript back from native code
Unity iOS supports limited native-to-managed callback functionality via UnitySendMessage:
UnitySendMessage("GameObjectName1", "MethodName1", "Message to send");
This function has three parameters: the name of the target GameObject, the script method to call on that object, and the message string to pass to the called method.
Known limitations:
- Only script methods that correspond to the following signature can be called from native code:
  function MethodName(message:string)
- Calls to UnitySendMessage are asynchronous and have a delay of one frame.
Automated plugin integration
Unity iOS supports automated plugin integration in a limited way. All files with the extensions .a, .m, .mm, .c and .cpp located in the Assets/Plugins/iOS folder will be merged into the generated Xcode project automatically. However, merging is done by symlinking files from Assets/Plugins/iOS to the final destination, which might affect some workflows. The .h files are not included in the Xcode project tree, but they appear on the destination file system, thus allowing compilation of the .m/.mm/.c/.cpp files.
Note: subfolders are currently not supported.
iOS Tips
- Managed-to-unmanaged calls are quite processor intensive on iOS. Try to avoid calling multiple native methods per frame.
- As mentioned above, wrap your native methods with an additional C# layer that calls native code on the device and returns dummy values in the Editor.
- String values returned from a native method should be UTF-8 encoded and allocated on the heap. Mono marshaling calls are free for strings like this.
- As mentioned above, the XCode project's "Classes" folder is a good place to store your native code because it is not overwritten when the project is updated.
- Another good place for storing native code is the Assets folder or one of its subfolders. Just add references from the XCode project to the native code files: right click on the "Classes" subfolder and choose "Add->Existing files...".
Examples
Bonjour Browser Sample
A simple example of the use of a native code plugin can be found here
This sample demonstrates how Objective-C code can be invoked from a Unity iOS application. This application implements a very simple Bonjour client. The application consists of a Unity iOS project (Plugins/Bonjour.cs is the C# interface to the native code, while BonjourTest.js is the JS script that implements the application logic) and native code (Assets/Code) that should be added to the built XCode project.
Page last updated: 2011-11-01

iphone-Downloadable-Content
This chapter does not aim to cover how to integrate your game with Apple's "StoreKit" API. It is assumed that you already have integration with "StoreKit" via a native code plugin.
Apple's "StoreKit" documentation defines four kinds of Products that could be sold via the "In App Purchase" process:
- Content
- Functionality
- Services
- Subscriptions
This chapter covers the first case only and focuses mainly on the downloadable content concept. AssetBundles are ideal candidates for use as downloadable content, and two scenarios will be covered:
- How to export asset bundles for use on iOS
- How to download and cache them on iOS
Exporting your assets for use on iOS
Having separate projects for downloadable content can be a good idea, allowing better separation between content that comes with your main application and content that is downloaded later.
Please note: Any game scripts included in downloadable content must also be present in the main executable.
- Create an Editor folder inside the Project View.
- Create an ExportBundle.js script there and place the following code inside:
@MenuItem ("Assets/Build AssetBundle From Selection - Track dependencies")
static function ExportBundle () {
    var str : String = EditorUtility.SaveFilePanel("Save Bundle...", Application.dataPath, Selection.activeObject.name, "assetbundle");
    if (str.Length != 0) {
        BuildPipeline.BuildAssetBundle(Selection.activeObject, Selection.objects, str, BuildAssetBundleOptions.CompleteAssets, BuildTarget.iPhone);
    }
}
- Design your objects that need to be downloadable as prefabs.
- Select a prefab that needs to be exported and right-click it.

If the first two steps were done properly, the Build AssetBundle From Selection - Track dependencies context menu item should now be visible.
- Select it if you want to include everything that this asset uses.
- A save dialog will be shown, enter the desired asset bundle file name. An .assetbundle extension will be added automatically. The Unity iOS runtime accepts only asset bundles built with the same version of the Unity editor as the final application. Read BuildPipeline.BuildAssetBundle for details.
Downloading your assets on iOS
- Asset bundles can be downloaded and loaded by using the WWW class and instantiating a main asset. Code sample:
var download : WWW;
var url = "http://somehost/somepath/someassetbundle.assetbundle";
download = new WWW (url);
yield download;
assetBundle = download.assetBundle;
if (assetBundle != null) {
    // Alternatively you can also load an asset by name (assetBundle.Load("my asset name"))
    var go : Object = assetBundle.mainAsset;
    if (go != null)
        instanced = Instantiate(go);
    else
        Debug.Log("Couldn't load resource");
} else {
    Debug.Log("Couldn't load resource");
}
- You can save required files to a Documents folder next to your game's Data folder.
public static string GetiPhoneDocumentsPath () {
    // Your game has read+write access to /var/mobile/Applications/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/Documents
    // Application.dataPath returns
    // /var/mobile/Applications/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/myappname.app/Data
    // Strip "/Data" from the path
    string path = Application.dataPath.Substring (0, Application.dataPath.Length - 5);
    // Strip the application name
    path = path.Substring(0, path.LastIndexOf('/'));
    return path + "/Documents";
}
- Cache a downloaded asset bundle using the .NET file API and reuse it in the future by loading it via the WWW class and file:///pathtoyourapplication/Documents/savedassetbundle.assetbundle. Sample code for caching:
// Code designed for caching on iPhone; the cachedAssetBundle path must be different when running in the Editor
// See the code snippet above for getting the path to your Documents folder
private var cachedAssetBundle : String = "path to your Documents folder" + "/savedassetbundle.assetbundle";
var cache = new System.IO.FileStream(cachedAssetBundle, System.IO.FileMode.Create);
cache.Write(download.bytes, 0, download.bytes.Length);
cache.Close();
Debug.Log("Cache saved: " + cachedAssetBundle);
MobileCustomizeSplashScreen

iOS
Under iOS Basic, a default splash screen will be displayed while your game loads, oriented according to the Default Screen Orientation option in the Player Settings.
Users with an iOS Pro license can use any texture in the project as a splash screen. The size of the texture depends on the target device (320x480 pixels for 1-3rd gen devices, 1024x768 for iPad, 640x960 for 4th gen devices) and supplied textures will be scaled to fit if necessary. You can set the splash screen textures using the iOS Player Settings.

Android
Under Android Basic, a default splash screen will be displayed while your game loads, oriented according to the Default Screen Orientation option in the Player Settings.
Android Pro users can use any texture in the project as a splash screen. You can set the texture from the Splash Image section of the Android Player Settings. You should also select the Splash scaling method from the following options:-
- Center (only scale down) will draw your image at its natural size unless it is too large, in which case it will be scaled down to fit.
- Scale to fit (letter-boxed) will draw your image so that the longer dimension fits the screen size exactly. Empty space around the sides in the shorter dimension will be filled in black.
- Scale to fill (cropped) will scale your image so that the shorter dimension fits the screen size exactly. The image will be cropped in the longer dimension.
iphone-troubleshooting
This section addresses common problems that can arise when using Unity. Each platform is dealt with separately below.

Desktop
In MonoDevelop, the Debug button is greyed out!
- This means that MonoDevelop was unable to find the Unity executable. In the MonoDevelop preferences, go to the Unity/Debugger section and then browse to where your Unity executable is located.
Is there a way to get rid of the welcome page in MonoDevelop?
- Yes. In the MonoDevelop preferences, go to the Visual Style section, and uncheck "Load welcome page on startup".
Geforce 7300GT on OSX 10.6.4
- Deferred rendering is disabled because materials are not displayed correctly for the Geforce 7300GT on OSX 10.6.4. This happens because of buggy video drivers.
On Windows x64, Unity crashes when my script throws a NullReferenceException
- Please apply Windows Hotfix #976038.
Graphics
Slow framerate and/or visual artifacts.
- This may occur if your video card drivers are not up to date. Make sure you have the latest official drivers from your card vendor.
Shadows
I see no shadows at all!
- Shadows are a Unity Pro only feature, so without Unity Pro you won't get shadows. Simpler shadow methods, like using a Projector, are still possible, of course.
- Shadows also require certain graphics hardware support. See Shadows page for details.
- Check if shadows are not completely disabled in Quality Settings.
- Shadows are currently not supported for Android and iOS mobile platforms.
Some of my objects do not cast or receive shadows
An object's Renderer must have Receive Shadows enabled for shadows to be rendered onto it. Also, an object must have Cast Shadows enabled in order to cast shadows on other objects (both are on by default).
Only opaque objects cast and receive shadows. This means that objects using the built-in Transparent or Particle shaders will not cast shadows. In most cases it is possible to use Transparent Cutout shaders for objects like fences, vegetation, etc. If you use custom written Shaders, they have to be pixel-lit and use the Geometry render queue. Objects using VertexLit shaders do not receive shadows but are able to cast them.
Only Pixel lights cast shadows. If you want to make sure that a light always casts shadows no matter how many other lights are in the scene, then you can set it to Force Pixel render mode (see the Light reference page).

iOS
Troubleshooting on iOS devices
There are some situations with iOS where your game can work perfectly in the Unity editor but then doesn't work or maybe doesn't even start on the actual device. The problems are often related to code or content quality. This section describes the most common scenarios.
The game stops responding after a while. Xcode shows "interrupted" in the status bar.
There are a number of reasons why this may happen. Typical causes include:
- Scripting errors such as using uninitialized variables, etc.
- Using 3rd party Thumb compiled native libraries. Such libraries trigger a known problem in the iOS SDK linker and might cause random crashes.
- Using generic types with value types as parameters (eg, List<int>, List<SomeStruct>, List<SomeEnum>, etc) for serializable script properties.
- Using reflection when managed code stripping is enabled.
- Errors in the native plugin interface (the managed code method signature does not match the native code function signature).
Information from the Xcode Debugger console can often help detect these problems (Xcode menu: View > Debug Area > Activate Console).
The Xcode console shows "Program received signal: SIGBUS" or an EXC_BAD_ACCESS error.
This message typically appears on iOS devices when your application receives a NullReferenceException. There are two ways to figure out where the fault happened:
Managed stack traces
Since version 3.4 Unity includes software-based handling of the NullReferenceException. The AOT compiler includes quick checks for null references each time a method or variable is accessed on an object. This feature affects script performance which is why it is enabled only for development builds (for basic license users it is enough to enable the "development build" option in the Build Settings dialog, while iOS pro license users additionally need to enable the "script debugging" option). If everything was done right and the fault actually is occurring in .NET code then you won't see EXC_BAD_ACCESS anymore. Instead, the .NET exception text will be printed in the Xcode console (or else your code will just handle it in a "catch" statement). Typical output might be:
Unhandled Exception: System.NullReferenceException: A null value was found where an object instance was required. at DayController+$handleTimeOfDay$121+$.MoveNext () [0x0035a] in DayController.js:122
This indicates that the fault happened in the handleTimeOfDay method of the DayController class, which works as a coroutine. Also if it is script code then you will generally be told the exact line number (eg, "DayController.js:122"). The offending line might be something like the following:
Instantiate(_imgwww.assetBundle.mainAsset);
This might happen if, say, the script accesses an asset bundle without first checking that it was downloaded correctly.
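A defensive version of that line might check the download result before touching the bundle. The sketch below is illustrative (not from the original sample); _imgwww is assumed to be the WWW object for the download:

```javascript
// Verify the download and the bundle contents before instantiating
if (_imgwww.error == null && _imgwww.assetBundle != null && _imgwww.assetBundle.mainAsset != null)
    Instantiate(_imgwww.assetBundle.mainAsset);
else
    Debug.Log("Asset bundle download failed: " + _imgwww.error);
```

Checks like these turn a hard device fault into a log message you can act on.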
Native stack traces
Native stack traces are a much more powerful tool for fault investigation but using them requires some expertise. Also, you generally can't continue after these native (hardware memory access) faults happen. To get a native stack trace, type bt all into the Xcode Debugger Console. Carefully inspect the printed stack traces - they may contain hints about where the error occurred. You might see something like:
... Thread 1 (thread 11523): #0 0x006267d0 in m_OptionsMenu_Start () #1 0x002e4160 in wrapper_runtime_invoke_object_runtime_invoke_void__this___object_intptr_intptr_intptr () #2 0x00a1dd64 in mono_jit_runtime_invoke (method=0x18b63bc, obj=0x5d10cb0, params=0x0, exc=0x2fffdd34) at /Users/mantasp/work/unity/unity-mono/External/Mono/mono/mono/mini/mini.c:4487 #3 0x0088481c in MonoBehaviour::InvokeMethodOrCoroutineChecked () ...
First of all you should find the stack trace for "Thread 1", which is the main thread. The very first lines of the stack trace will point to the place where the error occurred. In this example, the trace indicates that the NullReferenceException happened inside the "OptionsMenu" script's "Start" method. Looking carefully at this method implementation would reveal the cause of the problem. Typically, NullReferenceExceptions happen inside the Start method when incorrect assumptions are made about initialization order. In some cases only a partial stack trace is seen on the Debugger Console:
Thread 1 (thread 11523): #0 0x0062564c in start ()
This indicates that native symbols were stripped during the Release build of the application. The full stack trace can be obtained with the following procedure:
- Remove application from device.
- Clean all targets.
- Build and run.
- Get stack traces again as described above.
EXC_BAD_ACCESS starts occurring when an external library is linked to the Unity iOS application.
This usually happens when an external library is compiled with the ARM Thumb instruction set. Currently such libraries are not compatible with Unity. The problem can be solved easily by recompiling the library without Thumb instructions. You can do this for the library's Xcode project with the following steps:
- in Xcode, select "View" > "Navigators" > "Show Project Navigator" from the menu
- select the "Unity-iPhone" project, activate "Build Settings" tab
- in the search field enter : "Other C Flags"
- add -mno-thumb flag there and rebuild the library.
If the library source is not available you should ask the supplier for a non-thumb version of the library.
The Xcode console shows "WARNING -> applicationDidReceiveMemoryWarning()" and the application crashes immediately afterwards
(Sometimes you might see a message like Program received signal: 0.) This warning message is often not fatal and merely indicates that iOS is low on memory and is asking applications to free up some memory. Typically, background processes like Mail will free some memory and your application can continue to run. However, if your application continues to use memory or ask for more, the OS will eventually start killing applications and yours could be one of them. Apple does not document what memory usage is safe, but empirical observations show that applications using less than 50% of all device RAM (e.g. ~200-256 MB for a 2nd generation iPad) do not have major memory usage problems. The main metric you should rely on is how much RAM your application uses. Your application memory usage consists of four major components:
- application code (the OS needs to load and keep your application code in RAM, but some of it might be discarded if really needed)
- native heap (used by the engine to store its state, your assets, etc. in RAM)
- managed heap (used by your Mono runtime to keep C# or JavaScript objects)
- GLES driver memory pools: textures, framebuffers, compiled shaders, etc.
Your application memory usage can be tracked with the Xcode Instruments tools: Activity Monitor, Object Allocations and VM Tracker. You can start them from the Xcode Run menu: Product > Profile, and then select the specific tool. The Activity Monitor tool shows all process statistics, including Real memory, which can be regarded as the total amount of RAM used by your application. Note: the combination of OS and device hardware version might noticeably affect memory usage numbers, so you should be careful when comparing numbers obtained on different devices.
Note: The internal profiler shows only the heap allocated by .NET scripts. Total memory usage can be determined via Xcode Instruments as shown above. This figure includes parts of the application binary, some standard framework buffers, Unity engine internal state buffers, the .NET runtime heap (number printed by internal profiler), GLES driver heap and some other miscellaneous stuff.
The Object Allocations tool displays all allocations made by your application and includes both native heap and managed heap statistics (don't forget to check the Created and still living box to get the current state of the application). The important statistic is the Net bytes value.

To keep memory usage low:
- Reduce the application binary size by using the strongest iOS stripping options (Advanced license feature), and avoid unnecessary dependencies on different .NET libraries. See the player settings and player size optimization manual pages for further details.
- Reduce the size of your content. Use PVRTC compression for textures and use low poly models. See the manual page about reducing file size for more information.
- Don't allocate more memory than necessary in your scripts. Track mono heap size and usage with the internal profiler
- Note: with Unity 3.0, the scene loading implementation has changed significantly and now all scene assets are preloaded. This results in fewer hiccups when instantiating game objects. If you need more fine-grained control of asset loading and unloading during gameplay, you should use Resources.Load and Object.Destroy.
Querying the OS about the amount of free memory may seem like a good idea to evaluate how well your application is performing. However, the free memory statistic is likely to be unreliable since the OS uses a lot of dynamic buffers and caches. The only reliable approach is to keep track of memory consumption for your application and use that as the main metric. Pay attention to how the graphs from the tools described above change over time, especially after loading new levels.
The game runs correctly when launched from Xcode but crashes while loading the first level when launched manually on the device.
There could be several reasons for this. You need to inspect the device logs to get more details. Connect the device to your Mac, launch Xcode and select Window > Organizer from the menu. Select your device in the Organizer's left toolbar, then click on the "Console" tab and review the latest messages carefully. Additionally, you may need to investigate crash reports. You can find out how to obtain crash reports here: http://developer.apple.com/iphone/library/technotes/tn2008/tn2151.html.
The Xcode Organizer console contains the message "killed by SpringBoard".
There is a poorly-documented time limit for an iOS application to render its first frames and process input. If your application exceeds this limit, it will be killed by SpringBoard. This may happen in an application with a first scene which is too large, for example. To avoid this problem, it is advisable to create a small initial scene which just displays a splash screen, waits a frame or two with yield and then starts loading the real scene. This can be done with code as simple as the following:
function Start () {
yield;
Application.LoadLevel("Test");
}
Type.GetProperty() / Type.GetValue() cause crashes on the device
Currently Type.GetProperty() and Type.GetValue() are supported only for the .NET 2.0 Subset profile. You can select the .NET API compatibility level in the Player Settings.
Note: Type.GetProperty() and Type.GetValue() might be incompatible with managed code stripping and might need to be excluded (you can supply a custom non-strippable type list during the stripping process to accomplish this). For further details, see the iOS player size optimization guide.
The game crashes with the error message "ExecutionEngineException: Attempting to JIT compile method 'SomeType`1<SomeValueType>:.ctor ()' while running with --aot-only."
The Mono .NET implementation for iOS is based on AOT (ahead of time compilation to native code) technology, which has its limitations. It compiles only those generic type methods (where a value type is used as a generic parameter) which are explicitly used by other code. When such methods are used only via reflection or from native code (ie, the serialization system) then they get skipped during AOT compilation. The AOT compiler can be hinted to include code by adding a dummy method somewhere in the script code. This can refer to the missing methods and so get them compiled ahead of time.
void _unusedMethod()
{
var tmp = new SomeType<SomeValueType>();
}
Note: value types are basic types, enums and structs.
Various crashes occur on the device when a combination of System.Security.Cryptography and managed code stripping is used
.NET Cryptography services rely heavily on reflection and so are not compatible with managed code stripping, since stripping involves static code analysis. Sometimes the easiest solution to the crashes is to exclude the whole System.Security.Cryptography namespace from the stripping process.
The stripping process can be customized by adding a custom link.xml file to the Assets folder of your Unity project. This specifies which types and namespaces should be excluded from stripping. Further details can be found in the iOS player size optimization guide.
link.xml
<linker>
<assembly fullname="mscorlib">
<namespace fullname="System.Security.Cryptography" preserve="all"/>
</assembly>
</linker>
Application crashes when using System.Security.Cryptography.MD5 with managed code stripping
You might consider the advice listed above, or you can work around this problem by adding an extra reference to the specific class in your script code:
object obj = new MD5CryptoServiceProvider();
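Alternatively, the link.xml mechanism shown above can preserve just this one class instead of the whole namespace. This fragment follows the link.xml format described earlier (the type element is standard linker syntax, though the exact class you need to preserve depends on your code):

```xml
<linker>
<assembly fullname="mscorlib">
<type fullname="System.Security.Cryptography.MD5CryptoServiceProvider" preserve="all"/>
</assembly>
</linker>
```

Preserving a single type keeps the stripping savings for the rest of the namespace.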
"Ran out of trampolines of type 1/2" runtime error
This error usually happens if you use lots of recursive generics. You can hint to the AOT compiler to allocate more trampolines of type 1 or type 2. Additional AOT compiler command line options can be specified in the "Other Settings" section of the Player Settings. For type 1 trampolines, specify nrgctx-trampolines=ABCD, where ABCD is the number of new trampolines required (e.g. 4096). For type 2 trampolines, specify nimt-trampolines=ABCD.
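For example, if both trampoline pools need enlarging, the AOT compilation options field might contain something like the following (a sketch: the value 4096 is illustrative, and this assumes multiple Mono AOT options are separated with commas):

```
nrgctx-trampolines=4096,nimt-trampolines=4096
```

Start with a moderate value and increase it only if the runtime error persists, since extra trampolines enlarge the binary.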
After upgrading Xcode Unity iOS runtime fails with message "You are using Unity iPhone Basic. You are not allowed to remove the Unity splash screen from your game"
Some recent Xcode releases introduced changes to the PNG compression and optimization tool. These changes might cause false positives in the Unity iOS runtime's checks for splash screen modifications. If you encounter such problems, try upgrading Unity to the latest publicly available version. If that does not help, you might consider the following workaround:
- Replace your Xcode project from scratch when building from Unity (instead of appending it)
- Delete already installed project from device
- Clean project in Xcode (Product->Clean)
- Clear Xcode's Derived Data folders (Xcode->Preferences->Locations)
If this still does not help try disabling PNG re-compression in Xcode:
- Open your Xcode project
- Select "Unity-iPhone" project there
- Select "Build Settings" tab there
- Look for "Compress PNG files" option and set it to NO
App Store submission fails with "iPhone/iPod Touch: application executable is missing a required architecture. At least one of the following architecture(s) must be present: armv6" message
You might get this message when updating an existing application that was previously submitted with armv6 support. Unity 4.x and Xcode 4.5 no longer support the armv6 platform. To solve the submission problem, set the Target OS Version in the Unity Player Settings to 4.3 or higher.
WWW downloads are working fine in Unity Editor and on Android, but not on iOS
The most common mistake is to assume that WWW downloads always happen on a separate thread. On some platforms this might be true, but you should not take it for granted. The best way to track the status of a WWW download is either to use a yield statement or to check the status in an Update method. You should not use busy while loops for this.
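For example, a coroutine that yields on the WWW object (rather than spinning in a while loop) might be sketched as follows (the URL is illustrative):

```javascript
function Start () {
    var www = new WWW("http://somehost/somefile");
    yield www; // resumes only once the download has finished
    if (www.error == null)
        Debug.Log("Downloaded " + www.bytes.Length + " bytes");
    else
        Debug.Log("Download failed: " + www.error);
}
```

Yielding hands control back to Unity's main loop, so the download can progress regardless of which thread the platform uses for it.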
"PlayerLoop called recursively!" error occurs when using Cocoa via a native function called from a script
Some operations with the UI will result in iOS redrawing the window immediately (the most common example is adding a UIView with a UIViewController to the main UIWindow). If you call a native function from a script, it will happen inside Unity's PlayerLoop, resulting in PlayerLoop being called recursively. In such cases, you should consider using the performSelectorOnMainThread method with waitUntilDone set to false. It will inform iOS to schedule the operation to run between Unity's PlayerLoop calls.
Profiler or Debugger unable to see game running on iOS device
- Check that you have built a Development build, and ticked the "Enable Script Debugging" and "Autoconnect profiler" boxes (as appropriate).
- The application running on the device will make a multicast broadcast to 225.0.0.222 on UDP port 54997. Check that your network settings allow this traffic. Then, the profiler will make a connection to the remote device on a port in the range 55000 - 55511 to fetch profiler data from the device. These ports will need to be open for UDP access.
Missing DLLs
If your application runs fine in the editor but you get errors in your iOS project, this may be caused by missing DLLs (e.g. I18N.dll, I18N.West.dll). In this case, try copying those DLLs from within the Unity.app to your project's Assets/Plugins folder. The location of the DLLs within the Unity app is:
Unity.app/Contents/Frameworks/Mono/lib/mono/unity
You should then also check the stripping level of your project to ensure the classes in the DLLs aren't being removed when the build is optimised. Refer to the iOS Optimisation Page for more information on iOS Stripping Levels.
Xcode Debugger console reports: ExecutionEngineException: Attempting to JIT compile method '(wrapper native-to-managed) Test:TestFunc (int)' while running with --aot-only
Typically this message is received when a managed function delegate is passed to a native function but the required wrapper code wasn't generated when building the application. You can help the AOT compiler by hinting which methods will be passed as delegates to native code. This can be done by adding the "MonoPInvokeCallbackAttribute" custom attribute. Currently only static methods can be passed as delegates to native code.
Sample code:
using UnityEngine;
using System.Collections;
using System;
using System.Runtime.InteropServices;
using AOT;
public class NewBehaviourScript : MonoBehaviour {
[DllImport ("__Internal")]
private static extern void DoSomething (NoParamDelegate del1, StringParamDelegate del2);
delegate void NoParamDelegate ();
delegate void StringParamDelegate (string str);
[MonoPInvokeCallback (typeof (NoParamDelegate))]
public static void NoParamCallback()
{
Debug.Log ("Hello from NoParamCallback");
}
[MonoPInvokeCallback (typeof (StringParamDelegate))]
public static void StringParamCallback(string str)
{
Debug.Log (string.Format ("Hello from StringParamCallback {0}", str));
}
// Use this for initialization
void Start () {
DoSomething(NoParamCallback, StringParamCallback);
}
}

Android
Troubleshooting Android development
Unity fails to install your application to your device
- Verify that your computer can actually see and communicate with the device. See the Publishing Builds page for further details.
- Check the error message in the Unity console. This will often help diagnose the problem.
If you get an error saying "Unable to install APK, protocol failure" during a build then this indicates that the device is connected to a low-power USB port (perhaps a port on a keyboard or other peripheral). If this happens, try connecting the device to a USB port on the computer itself.
Your application crashes immediately after launch.
- Ensure that you are not trying to use NativeActivity with devices that do not support it.
- Try removing any native plugins you have.
- Try disabling stripping.
- Use adb logcat to get the crash report from your device.
Building DEX Failed
This is an error which will produce a message like the following:-
Building DEX Failed!
G:\Unity\JavaPluginSample\Temp/StagingArea> java -Xmx1024M -Djava.ext.dirs="G:/AndroidSDK/android-sdk_r09-windows\platform-tools/lib/" -jar "G:/AndroidSDK/android-sdk_r09-windows\platform-tools/lib/dx.jar" --dex --verbose --output=bin/classes.dex bin/classes.jar plugins
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
This is usually caused by having the wrong version of Java installed on your machine. Updating your Java installation to the latest version will generally solve this issue.
The game crashes after a couple of seconds when playing video
Make sure Settings->Developer Options->Don't keep activities isn't enabled on the phone. The video player is its own activity and therefore the regular game activity will be destroyed if the video player is activated.
My game quits when I press the sleep button
Change the <activity> tag in the AndroidManifest.xml to contain <android:configChanges> tag as described here.
An example activity tag might look something like this:-
<activity android:name=".AdMobTestActivity"
android:label="@string/app_name"
android:configChanges="fontScale|keyboard|keyboardHidden|locale|mnc|mcc|navigation|orientation|screenLayout|screenSize|smallestScreenSize|uiMode|touchscreen">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
iphone-bugreporting
Before submitting a bug report, please check the iOS Troubleshooting page, where you will find solutions to common crashes and other problems.
If your application crashes in the Xcode debugger then you can add valuable information to your bug report as follows:-
- Click Continue (Run->Continue) twice
- Open the debugger console (Run->Console) and enter (in the console): thread apply all bt
- Copy all console output and send it together with your bugreport.
If your application crashes on the iOS device then you should retrieve the crash report as described here on Apple's website. Please attach the crash report, your built application and console log to your bug report before submitting.
Page last updated: 2011-11-09
android-GettingStarted
Building games for a device running Android OS requires an approach similar to that for iOS development. However, the hardware is not completely standardized across all devices, and this raises issues that don't occur in iOS development. There are some feature differences in the Android version of Unity just as there are with the iOS version.
Setting up your Android Developer environment
You will need to have your Android developer environment set up before you can test your Unity games on the device. This involves downloading and installing the Android SDK with the different Android platforms and adding your physical device to your system (this is done a bit differently depending on whether you are developing on Windows or Mac). This setup process is explained on the Android developer website, and there may be additional information provided by the manufacturer of your device. Since this is a complex process, we've provided a basic outline of the tasks that must be completed before you can run code on your Android device or in the Android emulator. However, the best thing to do is follow the instructions step-by-step from the Android developer portal.
Access Android Functionality
Unity Android provides scripting APIs to access various input data and settings. You can find out more about the available classes on the Android scripting page.
Exposing Native C, C++ or Java Code to Scripts
Unity Android allows you to call custom functions written in C/C++ directly from C# scripts (Java functions can be called indirectly). To find out how to make functions from native code accessible from Unity, visit the plugins page.
Occlusion Culling
Unity includes support for occlusion culling which is a particularly valuable optimization on mobile platforms. More information can be found on the occlusion culling page.
Splash Screen Customization
The splash screen displayed while the game launches can be customized - see this page for further details.
Troubleshooting and Bug Reports
There are many reasons why your application may crash or fail to work as you expected. Our Android troubleshooting guide will help you get to the bottom of bugs as quickly as possible. If, after consulting the guide, you suspect the problem is internal to Unity then you should file a bug report - see this page for details on how to do this.
How Unity Android Differs from Desktop Unity
Strongly Typed JavaScript
For performance reasons, dynamic typing in JavaScript is always turned off in Unity Android, as if #pragma strict were applied automatically to all scripts. This is important to know if you start with a project originally developed for the desktop platforms since you may find you get unexpected compile errors when switching to Android; dynamic typing is the first thing to investigate. These errors are usually easy to fix if you make sure all variables are explicitly typed or use type inference on initialization.
ETC as Recommended Texture Compression
Although Unity Android does support DXT/PVRTC/ATC textures, Unity will decompress the textures into RGB(A) format at runtime if those compression methods are not supported by the particular device in use. This could have an impact on the GPU rendering speed and it is recommended to use the ETC format instead. ETC is the de facto standard compression format on Android, and should be supported on all post 2.0 devices. However, ETC does not support an alpha channel and RGBA 16-bit will sometimes be the best trade-off between size, quality and rendering speed where alpha is required.
It is also possible to create separate Android distribution archives (.apk) for each of the DXT/PVRTC/ATC formats, and let the Android Market's filtering system select the correct archives for different devices (see Publishing Builds for Android).
Movie Playback
Movie textures are not supported on Android, but a full-screen streaming playback is provided via scripting functions. To learn about supported file formats and scripting API, consult the movie page or the Android supported media formats page.
Further Reading
- Android SDK Setup
- Android Remote
- Trouble Shooting
- Reporting crash bugs under Android
- Features currently not supported by Unity Android
- android-OBBsupport
- Player Settings
- Android Scripting
- Building Plugins for Android
- Customizing the Splash screen of Your Mobile Application
android-sdksetup
There are some steps you must follow before you can build and run any code on your Android device. This is true regardless of whether you use Unity or write Android applications from scratch.
1. Download the Android SDK
Go to the Android Developer SDK webpage. Download and unpack the latest Android SDK.
2. Installing the Android SDK
Follow the instructions under Installing the SDK (although you can freely skip the optional parts relating to Eclipse). In step 4 of Installing the SDK be sure to add at least one Android platform with API level equal to or higher than 9 (Platform 2.3 or greater), the Platform Tools, and the USB drivers if you're using Windows.
3. Get the device recognized by your system
This can be tricky, especially under Windows based systems where drivers tend to be a problem. Also, your device may come with additional information or specific drivers from the manufacturer.
- For Windows: If the Android device is automatically recognized by the system you still might need to update the drivers with the ones that came with the Android SDK. This is done through the Windows Device Manager.
- If the device is not recognized automatically, use the drivers from the Android SDK, or any specific drivers provided by the manufacturer. Additional info can be found here: USB Drivers for Windows
- For Mac: If you're developing on Mac OSX then no additional drivers are usually required.
Note: Don't forget to turn on "USB Debugging" on your device. You can do this from the home screen: press MENU, select Applications > Development, then enable USB debugging.
If you are unsure whether your device is properly installed on your system, please read the trouble-shooting page for details.
4. Add the Android SDK path to Unity
The first time you build a project for Android (or if Unity later fails to locate the SDK) you will be asked to locate the folder where you installed the Android SDK (you should select the root folder of the SDK installation). The location of the Android SDK can also be changed in the editor by selecting Unity > Preferences from the menu and then clicking on External Tools in the preferences window.
Page last updated: 2012-03-24
android-remote
Android Remote is an Android application that makes your device act as a remote control for the project in Unity. This is useful for rapid development when you don't want to compile and deploy your project to the device for each change.
How to use Android remote
To use Android Remote, first make sure that you have the latest Android SDK installed (this is necessary to set up port forwarding on the device). Then connect the device to your computer with a USB cable and launch the Android Remote app. When you press Play in the Unity editor, the device will act as a remote control and will pass accelerometer and touch input events to the running game.
Page last updated: 2011-11-24
Troubleshooting
This section addresses common problems that can arise when using Unity. Each platform is dealt with separately below.

Desktop
In MonoDevelop, the Debug button is greyed out!
- This means that MonoDevelop was unable to find the Unity executable. In the MonoDevelop preferences, go to the Unity/Debugger section and then browse to where your Unity executable is located.
Is there a way to get rid of the welcome page in MonoDevelop?
- Yes. In the MonoDevelop preferences, go to the Visual Style section, and uncheck "Load welcome page on startup".
Geforce 7300GT on OSX 10.6.4
- Deferred rendering is disabled because materials are not displayed correctly for the Geforce 7300GT on OSX 10.6.4. This is caused by buggy video drivers.
On Windows x64, Unity crashes when my script throws a NullReferenceException
- Please apply Windows Hotfix #976038.
Graphics
Slow framerate and/or visual artifacts.
- This may occur if your video card drivers are not up to date. Make sure you have the latest official drivers from your card vendor.
Shadows
I see no shadows at all!
- Shadows are a Unity Pro only feature, so without Unity Pro you won't get shadows. Simpler shadow methods, like using a Projector, are still possible, of course.
- Shadows also require certain graphics hardware support. See Shadows page for details.
- Check that shadows are not disabled in the Quality Settings.
- Shadows are currently not supported for Android and iOS mobile platforms.
Some of my objects do not cast or receive shadows
An object's Renderer must have Receive Shadows enabled for shadows to be rendered onto it. Also, an object must have Cast Shadows enabled in order to cast shadows on other objects (both are on by default).
Only opaque objects cast and receive shadows. This means that objects using the built-in Transparent or Particle shaders will not cast shadows. In most cases it is possible to use Transparent Cutout shaders for objects like fences, vegetation, etc. If you use custom written Shaders, they have to be pixel-lit and use the Geometry render queue. Objects using VertexLit shaders do not receive shadows but are able to cast them.
Only Pixel lights cast shadows. If you want to make sure that a light always casts shadows no matter how many other lights are in the scene, then you can set it to Force Pixel render mode (see the Light reference page).

iOS
Troubleshooting on iOS devices
There are some situations with iOS where your game can work perfectly in the Unity editor but then doesn't work or maybe doesn't even start on the actual device. The problems are often related to code or content quality. This section describes the most common scenarios.
The game stops responding after a while. Xcode shows "interrupted" in the status bar.
There are a number of reasons why this may happen. Typical causes include:
- Scripting errors such as using uninitialized variables, etc.
- Using 3rd party Thumb compiled native libraries. Such libraries trigger a known problem in the iOS SDK linker and might cause random crashes.
- Using generic types with value types as parameters (eg, List<int>, List<SomeStruct>, List<SomeEnum>, etc) for serializable script properties.
- Using reflection when managed code stripping is enabled.
- Errors in the native plugin interface (the managed code method signature does not match the native code function signature).
Information from the XCode Debugger console can often help detect these problems (Xcode menu: View > Debug Area > Activate Console).
The Xcode console shows "Program received signal: SIGBUS" or an EXC_BAD_ACCESS error.
This message typically appears on iOS devices when your application receives a NullReferenceException. There are two ways to figure out where the fault happened:
Managed stack traces
Since version 3.4, Unity includes software-based handling of NullReferenceException. The AOT compiler inserts quick checks for null references each time a method or variable is accessed on an object. This feature affects script performance, which is why it is enabled only for development builds (for basic license users it is enough to enable the "development build" option in the Build Settings dialog, while iOS Pro license users additionally need to enable the "script debugging" option). If everything was done right and the fault actually occurs in .NET code, then you won't see EXC_BAD_ACCESS anymore. Instead, the .NET exception text will be printed in the Xcode console (or your code will simply handle it in a "catch" statement). Typical output might be:
Unhandled Exception: System.NullReferenceException: A null value was found where an object instance was required. at DayController+$handleTimeOfDay$121+$.MoveNext () [0x0035a] in DayController.js:122
This indicates that the fault happened in the handleTimeOfDay method of the DayController class, which works as a coroutine. Also if it is script code then you will generally be told the exact line number (eg, "DayController.js:122"). The offending line might be something like the following:
Instantiate(_imgwww.assetBundle.mainAsset);
This might happen if, say, the script accesses an asset bundle without first checking that it was downloaded correctly.
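A defensive version of that call might look like the following sketch (the `_imgwww` variable and the checks shown are illustrative, not a prescribed pattern):

```
// Sketch: verify the download succeeded before touching the asset bundle.
// WWW.error is null on success; assetBundle can still be null for invalid content.
if (_imgwww.error == null && _imgwww.assetBundle != null) {
    Instantiate(_imgwww.assetBundle.mainAsset);
} else {
    Debug.Log("Asset bundle not available: " + _imgwww.error);
}
```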
Native stack traces
Native stack traces are a much more powerful tool for fault investigation but using them requires some expertise. Also, you generally can't continue after these native (hardware memory access) faults happen. To get a native stack trace, type bt all into the Xcode Debugger Console. Carefully inspect the printed stack traces - they may contain hints about where the error occurred. You might see something like:
... Thread 1 (thread 11523): #0 0x006267d0 in m_OptionsMenu_Start () #1 0x002e4160 in wrapper_runtime_invoke_object_runtime_invoke_void__this___object_intptr_intptr_intptr () #2 0x00a1dd64 in mono_jit_runtime_invoke (method=0x18b63bc, obj=0x5d10cb0, params=0x0, exc=0x2fffdd34) at /Users/mantasp/work/unity/unity-mono/External/Mono/mono/mono/mini/mini.c:4487 #3 0x0088481c in MonoBehaviour::InvokeMethodOrCoroutineChecked () ...
First of all you should find the stack trace for "Thread 1", which is the main thread. The very first lines of the stack trace will point to the place where the error occurred. In this example, the trace indicates that the NullReferenceException happened inside the "OptionsMenu" script's "Start" method. Looking carefully at this method implementation would reveal the cause of the problem. Typically, NullReferenceExceptions happen inside the Start method when incorrect assumptions are made about initialization order. In some cases only a partial stack trace is seen on the Debugger Console:
Thread 1 (thread 11523): #0 0x0062564c in start ()
This indicates that native symbols were stripped during the Release build of the application. The full stack trace can be obtained with the following procedure:
- Remove application from device.
- Clean all targets.
- Build and run.
- Get stack traces again as described above.
EXC_BAD_ACCESS starts occurring when an external library is linked to the Unity iOS application.
This usually happens when an external library is compiled with the ARM Thumb instruction set. Currently such libraries are not compatible with Unity. The problem can be solved easily by recompiling the library without Thumb instructions. You can do this for the library's Xcode project with the following steps:
- in Xcode, select "View" > "Navigators" > "Show Project Navigator" from the menu
- select the "Unity-iPhone" project, activate "Build Settings" tab
- in the search field enter "Other C Flags"
- add the -mno-thumb flag there and rebuild the library.
If the library source is not available you should ask the supplier for a non-thumb version of the library.
The Xcode console shows "WARNING -> applicationDidReceiveMemoryWarning()" and the application crashes immediately afterwards
(Sometimes you might see a message like Program received signal: 0.) This warning message is often not fatal and merely indicates that iOS is low on memory and is asking applications to free up some memory. Typically, background processes like Mail will free some memory and your application can continue to run. However, if your application continues to use memory or asks for more, the OS will eventually start killing applications, and yours could be one of them. Apple does not document what memory usage is safe, but empirical observations show that applications using less than 50% of all device RAM (e.g. ~200-256 MB for a 2nd generation iPad) do not have major memory usage problems. The main metric you should rely on is how much RAM your application uses. Your application memory usage consists of four major components:
- application code (the OS needs to load and keep your application code in RAM, but some of it might be discarded if really needed)
- native heap (used by the engine to store its state, your assets, etc. in RAM)
- managed heap (used by your Mono runtime to keep C# or JavaScript objects)
- GLES driver memory pools: textures, framebuffers, compiled shaders, etc.
Your application memory usage can be tracked with Xcode Instruments tools such as Activity Monitor, Object Allocations and VM Tracker. You can start them from the Xcode Run menu: Product > Profile, and then select the specific tool. The Activity Monitor tool shows all process statistics, including Real Memory, which can be regarded as the total amount of RAM used by your application. Note: the combination of OS and device hardware version can noticeably affect memory usage numbers, so you should be careful when comparing numbers obtained on different devices.
Note: The internal profiler shows only the heap allocated by .NET scripts. Total memory usage can be determined via Xcode Instruments as shown above. This figure includes parts of the application binary, some standard framework buffers, Unity engine internal state buffers, the .NET runtime heap (number printed by internal profiler), GLES driver heap and some other miscellaneous stuff.
The Object Allocations tool displays all allocations made by your application and includes both native heap and managed heap statistics (don't forget to check the Created and still living box to get the current state of the application). The important statistic here is the Net bytes value.

To keep memory usage low:
- Reduce the application binary size by using the strongest iOS stripping options (Advanced license feature), and avoid unnecessary dependencies on different .NET libraries. See the player settings and player size optimization manual pages for further details.
- Reduce the size of your content. Use PVRTC compression for textures and use low poly models. See the manual page about reducing file size for more information.
- Don't allocate more memory than necessary in your scripts. Track the mono heap size and usage with the internal profiler.
- Note: with Unity 3.0, the scene loading implementation has changed significantly and now all scene assets are preloaded. This results in fewer hiccups when instantiating game objects. If you need more fine-grained control of asset loading and unloading during gameplay, you should use Resources.Load and Object.Destroy.
Querying the OS about the amount of free memory may seem like a good idea to evaluate how well your application is performing. However, the free memory statistic is likely to be unreliable since the OS uses a lot of dynamic buffers and caches. The only reliable approach is to keep track of memory consumption for your application and use that as the main metric. Pay attention to how the graphs from the tools described above change over time, especially after loading new levels.
The game runs correctly when launched from Xcode but crashes while loading the first level when launched manually on the device.
There could be several reasons for this. You need to inspect the device logs to get more details. Connect the device to your Mac, launch Xcode and select Window > Organizer from the menu. Select your device in the Organizer's left toolbar, then click on the "Console" tab and review the latest messages carefully. Additionally, you may need to investigate crash reports. You can find out how to obtain crash reports here: http://developer.apple.com/iphone/library/technotes/tn2008/tn2151.html.
The Xcode Organizer console contains the message "killed by SpringBoard".
There is a poorly-documented time limit for an iOS application to render its first frames and process input. If your application exceeds this limit, it will be killed by SpringBoard. This may happen in an application with a first scene which is too large, for example. To avoid this problem, it is advisable to create a small initial scene which just displays a splash screen, waits a frame or two with yield and then starts loading the real scene. This can be done with code as simple as the following:
function Start () {
    yield;
    Application.LoadLevel("Test");
}
Type.GetProperty() / Type.GetValue() cause crashes on the device
Currently Type.GetProperty() and Type.GetValue() are supported only for the .NET 2.0 Subset profile. You can select the .NET API compatibility level in the Player Settings.
Note: Type.GetProperty() and Type.GetValue() might be incompatible with managed code stripping and might need to be excluded (you can supply a custom non-strippable type list during the stripping process to accomplish this). For further details, see the iOS player size optimization guide.
The game crashes with the error message "ExecutionEngineException: Attempting to JIT compile method 'SomeType`1<SomeValueType>:.ctor ()' while running with --aot-only."
The Mono .NET implementation for iOS is based on AOT (ahead of time compilation to native code) technology, which has its limitations. It compiles only those generic type methods (where a value type is used as a generic parameter) which are explicitly used by other code. When such methods are used only via reflection or from native code (ie, the serialization system) then they get skipped during AOT compilation. The AOT compiler can be hinted to include code by adding a dummy method somewhere in the script code. This can refer to the missing methods and so get them compiled ahead of time.
void _unusedMethod()
{
    // Dummy instantiation: forces the AOT compiler to emit this generic constructor
    var tmp = new SomeType<SomeValueType>();
}
Note: value types are basic types, enums and structs.
Various crashes occur on the device when a combination of System.Security.Cryptography and managed code stripping is used
.NET Cryptography services rely heavily on reflection and so are not compatible with managed code stripping, since stripping involves static code analysis. Sometimes the easiest solution to the crashes is to exclude the whole System.Security.Cryptography namespace from the stripping process.
The stripping process can be customized by adding a custom link.xml file to the Assets folder of your Unity project. This specifies which types and namespaces should be excluded from stripping. Further details can be found in the iOS player size optimization guide.
link.xml
<linker>
   <assembly fullname="mscorlib">
      <namespace fullname="System.Security.Cryptography" preserve="all"/>
   </assembly>
</linker>
Application crashes when using System.Security.Cryptography.MD5 with managed code stripping
You might consider the advice listed above, or work around the problem by adding an extra reference to the specific class in your script code:
object obj = new MD5CryptoServiceProvider();
"Ran out of trampolines of type 1/2" runtime error
This error usually happens if you use lots of recursive generics. You can hint to the AOT compiler to allocate more trampolines of type 1 or type 2. Additional AOT compiler command line options can be specified in the "Other Settings" section of the Player Settings. For type 1 trampolines, specify nrgctx-trampolines=ABCD, where ABCD is the number of new trampolines required (e.g. 4096). For type 2 trampolines, specify nimt-trampolines=ABCD.
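For example, the extra AOT options might look like the following (the trampoline counts and the comma-separated format are illustrative; consult the documentation for your Unity version for the exact syntax):

```
nrgctx-trampolines=4096,nimt-trampolines=4096
```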
After upgrading Xcode, the Unity iOS runtime fails with the message "You are using Unity iPhone Basic. You are not allowed to remove the Unity splash screen from your game"
Some recent Xcode releases introduced changes to the PNG compression and optimization tool. These changes can cause false positives in the Unity iOS runtime's checks for splash screen modifications. If you encounter such problems, try upgrading Unity to the latest publicly available version. If that does not help, you might consider the following workaround:
- Replace your Xcode project from scratch when building from Unity (instead of appending to it)
- Delete the already-installed project from the device
- Clean project in Xcode (Product->Clean)
- Clear Xcode's Derived Data folders (Xcode->Preferences->Locations)
If this still does not help try disabling PNG re-compression in Xcode:
- Open your Xcode project
- Select "Unity-iPhone" project there
- Select "Build Settings" tab there
- Look for "Compress PNG files" option and set it to NO
App Store submission fails with "iPhone/iPod Touch: application executable is missing a required architecture. At least one of the following architecture(s) must be present: armv6" message
You might get this message when updating an existing application that was previously submitted with armv6 support. Unity 4.x and Xcode 4.5 no longer support the armv6 platform. To solve the submission problem, set Target OS Version in the Unity Player Settings to 4.3 or higher.
WWW downloads are working fine in Unity Editor and on Android, but not on iOS
The most common mistake is to assume that WWW downloads always happen on a separate thread. On some platforms this might be true, but you should not take it for granted. The best way to track the status of a WWW download is either to use a yield statement or to check the status in an Update method. Do not use busy while loops for this.
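As a sketch (Unity script; the URL is a placeholder), a coroutine can simply yield on the WWW object so the code that reads the result runs only after the download has finished:

```
function Start () {
    var www = new WWW("http://example.com/file"); // placeholder URL
    yield www; // resumes when the download completes, whatever thread it ran on
    if (www.error == null) {
        Debug.Log("Downloaded " + www.bytes.Length + " bytes");
    } else {
        Debug.Log("Download failed: " + www.error);
    }
}
```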
"PlayerLoop called recursively!" error occurs when using Cocoa via a native function called from a script
Some operations with the UI will result in iOS redrawing the window immediately (the most common example is adding a UIView with a UIViewController to the main UIWindow). If you call a native function from a script, it will happen inside Unity's PlayerLoop, resulting in PlayerLoop being called recursively. In such cases, you should consider using the performSelectorOnMainThread method with waitUntilDone set to false. It will inform iOS to schedule the operation to run between Unity's PlayerLoop calls.
Profiler or Debugger unable to see game running on iOS device
- Check that you have built a Development build, and ticked the "Enable Script Debugging" and "Autoconnect profiler" boxes (as appropriate).
- The application running on the device will make a multicast broadcast to 225.0.0.222 on UDP port 54997. Check that your network settings allow this traffic. Then, the profiler will make a connection to the remote device on a port in the range 55000 - 55511 to fetch profiler data from the device. These ports will need to be open for UDP access.
Missing DLLs
If your application runs fine in the editor but you get errors in your iOS project, this may be caused by missing DLLs (e.g. I18N.dll, I18N.West.dll). In this case, try copying those DLLs from within Unity.app to your project's Assets/Plugins folder. The location of the DLLs within the Unity app is:
Unity.app/Contents/Frameworks/Mono/lib/mono/unity
You should then also check the stripping level of your project to ensure the classes in the DLLs aren't being removed when the build is optimised. Refer to the iOS Optimisation Page for more information on iOS Stripping Levels.
Xcode Debugger console reports: ExecutionEngineException: Attempting to JIT compile method '(wrapper native-to-managed) Test:TestFunc (int)' while running with --aot-only
This message is typically received when a managed function delegate is passed to a native function, but the required wrapper code was not generated when building the application. You can help the AOT compiler by hinting which methods will be passed as delegates to native code. This is done by adding the "MonoPInvokeCallbackAttribute" custom attribute. Currently only static methods can be passed as delegates to native code.
Sample code:
using UnityEngine;
using System.Collections;
using System;
using System.Runtime.InteropServices;
using AOT;

public class NewBehaviourScript : MonoBehaviour {

    [DllImport ("__Internal")]
    private static extern void DoSomething (NoParamDelegate del1, StringParamDelegate del2);

    delegate void NoParamDelegate ();
    delegate void StringParamDelegate (string str);

    [MonoPInvokeCallback (typeof (NoParamDelegate))]
    public static void NoParamCallback ()
    {
        Debug.Log ("Hello from NoParamCallback");
    }

    [MonoPInvokeCallback (typeof (StringParamDelegate))]
    public static void StringParamCallback (string str)
    {
        Debug.Log (string.Format ("Hello from StringParamCallback {0}", str));
    }

    // Use this for initialization
    void Start () {
        DoSomething (NoParamCallback, StringParamCallback);
    }
}

Android
Troubleshooting Android development
Unity fails to install your application to your device
- Verify that your computer can actually see and communicate with the device. See the Publishing Builds page for further details.
- Check the error message in the Unity console. This will often help diagnose the problem.
If you get an error saying "Unable to install APK, protocol failure" during a build then this indicates that the device is connected to a low-power USB port (perhaps a port on a keyboard or other peripheral). If this happens, try connecting the device to a USB port on the computer itself.
Your application crashes immediately after launch.
- Ensure that you are not trying to use NativeActivity with devices that do not support it.
- Try removing any native plugins you have.
- Try disabling stripping.
- Use adb logcat to get the crash report from your device.
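For example, assuming adb is on your PATH and the device is connected, the device log can be captured like this (the output file name is illustrative):

```shell
# Dump the current log buffer to a file for inspection
adb logcat -d > crash-log.txt

# Or stream only Unity-tagged messages live
adb logcat -s Unity
```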
Building DEX Failed
This is an error that produces a message like the following:
Building DEX Failed! G:\Unity\JavaPluginSample\Temp/StagingArea> java -Xmx1024M -Djava.ext.dirs="G:/AndroidSDK/android-sdk_r09-windows\platform-tools/lib/" -jar "G:/AndroidSDK/android-sdk_r09-windows\platform-tools/lib/dx.jar" --dex --verbose --output=bin/classes.dex bin/classes.jar plugins Error occurred during initialization of VM Could not reserve enough space for object heap Could not create the Java virtual machine.
This is usually caused by having the wrong version of Java installed on your machine. Updating your Java installation to the latest version will generally solve this issue.
The game crashes after a couple of seconds when playing video
Make sure Settings->Developer Options->Don't keep activities isn't enabled on the phone. The video player is its own activity and therefore the regular game activity will be destroyed if the video player is activated.
My game quits when I press the sleep button
Change the <activity> tag in the AndroidManifest.xml to include the android:configChanges attribute, as described here.
An example activity tag might look like this:
<activity android:name=".AdMobTestActivity"
          android:label="@string/app_name"
          android:configChanges="fontScale|keyboard|keyboardHidden|locale|mnc|mcc|navigation|orientation|screenLayout|screenSize|smallestScreenSize|uiMode|touchscreen">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>
Reporting crash bugs on Android
Before submitting a bug with just "it crashes" in the message body, please look through the Troubleshooting Android development page first.
At this point there are no advanced debugging tools to investigate on-device app crashes. However, you can use the adb tool (found under Android-SDK/platform-tools) with the logcat parameter. It prints status reports from your device, and these reports may include information related to the crash.
If you are sure that the crash you're experiencing is due to a bug in the Unity software, please save the adb logcat output, create a reproduction project and use the bug reporter to inform us about it. We will get back to you as soon as we can.
Page last updated: 2011-02-25
Unsupported features on Android
Graphics
- Non-square textures are not supported by the ETC format.
- Movie Textures are not supported, use a full-screen streaming playback instead. Please see the Movie playback page for more information.
Scripting
- OnMouseEnter, OnMouseOver, OnMouseExit, OnMouseDown, OnMouseUp, and OnMouseDrag events are not supported on Android.
- Dynamic features like Duck Typing are not supported. Use #pragma strict for your scripts to force the compiler to report dynamic features as errors.
- Video streaming via WWW class is not supported.
Android OBB Support
This page explains how .apk and .obb files work with Unity 4.0, and in general.
- Any application (.apk) bigger than 50 MB has to be split up if you want to publish it on Google Play.
- The Split App option in 4.0 will create an .apk that includes all the binary code (Java, native, plugins, etc.) AND the first level/scene of your Unity project.
Everything else (all additional scenes, resources, streaming assets, etc.) is placed in the .obb.
- When an .apk built with Split App enabled starts, the application checks whether it can access the .obb file at its expected location on the SD card (the location is explained in the APK Expansion docs from Google). If the .obb cannot be found, only the first level can be accessed (since the rest of the data is in the .obb); the first level is then required to make the .obb available on the SD card before the application can proceed to subsequent levels. If the .obb is found, Application.dataPath switches from the .apk path to point to the .obb instead, and downloading the .obb is not necessary.
- The contents of the .obb are never used manually. Always treat the .apk+.obb as a unique bundle, the same way you would treat a single big .apk.
This is what's in Unity 4.0. The Split App option is not the only way to split an .apk into .apk/.obb (you can use 3rd party plugins/asset bundles/whatever), but it's the only automatic splitting officially supported.
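The startup check described above can be sketched as follows (Unity script; a simplified illustration, not the complete logic of the official download plugin):

```
function Start () {
    // When the .obb is mounted, Application.dataPath points into it.
    if (Application.dataPath.Contains(".obb")) {
        Application.LoadLevel(1); // expansion data available; continue as normal
    } else {
        // Stay in this first scene and trigger the .obb download first.
    }
}
```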
Now to the downloading of the .obb.
- The .obb may (but is not required to, at least in its current form) be hosted on the Google Play servers.
- If the .obb is published together with the .apk on Google Play, you must also include code to download the .obb (for devices that require it, and for scenarios where the .obb is lost).
- The asset store has a plugin (adapted from the Google Apk Expansion examples) which does this for you. It will download the .obb and put it in the right place on the sdcard.
- When using the asset store plugin you obviously need to call that plugin from the first scene (as explained above).
- The asset store plugin can also be used to download .obb's created in some other way (single data file, a zip of asset bundles, etc) - it's agnostic to how the .obb was created.
- This is only the first version of the automatic .obb split support.
- There will be improvements, and we have a long list of things we want to add.
Android Player Settings
Player Settings is where you define various parameters (platform-specific) for the final version of your game built with Unity. Some of these values are used by the Resolution Dialog that launches when you open a standalone game, for example, while others are used by Xcode when building your game for iOS devices, so it is important to fill them in correctly.
The Player Settings are opened from the Unity menu.
Global settings that apply to every project you create:
| Cross-Platform Properties | |
|---|---|
| Company Name | The name of your company. This is used to locate the preferences file. |
| Product Name | The name that appears on the menu bar when your game is running, and is also used to locate the preferences file. |
| Default Icon | The default icon that the application will use on every platform (you can override this later for platform-specific needs). |
| Default Cursor | The default cursor that the application will use on every supported platform. |
| Cursor Hotspot | The cursor hotspot position in pixels, measured from the top left corner of the default cursor. |
Per-Platform Settings
Desktop
Web Player
Resolution And Presentation
| Resolution | |
| Default Screen Width | Screen width the Web Player will be generated with. |
| Default Screen Height | Screen height the Web Player will be generated with. |
| Run in background | Check this if you don't want the game to stop executing when the Web Player loses focus. |
| WebPlayer Template | For more information see the "Using WebPlayer templates" page; each built-in and custom template is shown with an icon in this section. |
Icon
Icons have no effect on WebPlayer builds (you can set icons for each native client build in the corresponding sections of the Player Settings).
Other Settings
| Rendering | |
| Rendering Path | This property is shared between standalone and WebPlayer content. |
| Vertex Lit | Lowest lighting fidelity, no support for shadows. Best used on old machines or limited mobile platforms. |
| Forward with Shaders | Good support for lighting features; limited support for shadows. |
| Deferred Lighting | Best support for lighting and shadowing features, but requires a certain level of hardware support. Best used if you have many realtime lights. Unity Pro only. |
| Color Space | The color space used for rendering. |
| GammaSpace Rendering | Rendering is gamma-corrected. |
| Linear Rendering Hardware Sampling | Rendering is performed in linear space. |
| Use Direct3D 11 | Use Direct3D 11 for rendering. |
| Static Batching | Set this to use Static Batching on your build (disabled by default in the Web Player). Unity Pro only. |
| Dynamic Batching | Set this to use Dynamic Batching on your build (enabled by default). |
| Streaming | |
| First Streamed Level | If you are publishing a Streamed Web Player, this is the index of the first level that will have access to all Resources.Load assets. |
| Configuration | |
| Scripting Define Symbols | Custom compilation flags (see the platform dependent compilation page for details). |
| Optimization | |
| Optimize Mesh Data | Remove any data from meshes that is not required by the material applied to them (tangents, normals, colors, UVs). |
Standalone
Resolution And Presentation
| Resolution | |
| Default Screen Width | Screen width the standalone game will use by default. |
| Default Screen Height | Screen height the player will use by default. |
| Run in background | Check this if you don't want the game to stop executing when the player loses focus. |
| Standalone Player Options | |
| Default is Full Screen | Check this if you want to start your game in fullscreen mode by default. |
| Capture Single Screen | If enabled, a standalone game running in fullscreen mode will not darken the secondary monitor in multi-monitor setups. |
| DisplayResolution Dialog | |
| Disabled | No resolution dialog will appear when starting the game. |
| Enabled | The resolution dialog will always appear when the game is launched. |
| Hidden by default | The resolution dialog will only appear when the "Alt" key is held down at game startup. |
| Use Player Log | Write a log file with debugging information. If you plan to submit your application to the Mac App Store, leave this option unchecked. Checked is the default. |
| Resizable Window | Allow the user to resize the standalone player window. |
| Mac App Store Validation | Enable receipt validation for the Mac App Store. |
| Mac Fullscreen Mode | Options for fullscreen mode on Mac builds. |
| Capture Display | Unity takes over the whole display (i.e. GUI from other apps will not be shown and the user cannot switch apps until fullscreen mode is exited). |
| Fullscreen Window | Unity runs in a window that covers the whole screen at desktop resolution. The GUI of other apps is displayed correctly, and on OSX 10.7 and above it is possible to switch apps with Cmd + Tab or trackpad gestures. |
| Fullscreen Window with Menu Bar and Dock | As Fullscreen Window mode, but the standard menu bar and Dock are also shown. |
| Supported Aspect Ratios | The aspect ratios selectable in the Resolution Dialog are the items enabled in this list that are supported by the monitor's resolutions. |
Icon
| Override for Standalone | Check this if you want to assign a custom icon for your standalone game. Different-sized icons should be placed in the squares below. |
Splash Image
| Config Dialog Banner | Add a custom splash image to be displayed when the game starts. |
Other Settings
| Rendering | |
| Rendering Path | This property is shared between standalone and WebPlayer content. |
| Vertex Lit | Lowest lighting fidelity, no support for shadows. Best used on old machines or limited mobile platforms. |
| Forward with Shaders | Good support for lighting features; limited support for shadows. |
| Deferred Lighting | Best support for lighting and shadowing features, but requires a certain level of hardware support. Best used if you have many realtime lights. Unity Pro only. |
| Color Space | The color space used for rendering. |
| GammaSpace Rendering | Rendering is gamma-corrected. |
| Linear Rendering Hardware Sampling | Rendering is performed in linear space. |
| Static Batching | Set this to use Static Batching on your build (disabled by default in the Web Player). Unity Pro only. |
| Dynamic Batching | Set this to use Dynamic Batching on your build (enabled by default). |
| Configuration | |
| Scripting Define Symbols | Custom compilation flags (see the platform dependent compilation page for details). |
| Optimization | |
| API Compatibility Level | |
| .Net 2.0 | .Net 2.0 libraries. Maximum .NET compatibility, biggest file sizes. |
| .Net 2.0 Subset | Subset of full .NET compatibility, smaller file sizes. |
| Optimize Mesh Data | Remove any data from meshes that is not required by the material applied to them (tangents, normals, colors, UVs). |
iOS
Resolution And Presentation
| Resolution | |
| Default Orientation | (This property is shared between iOS and Android.) |
| Portrait | The device is in portrait mode, held upright with the home button at the bottom. |
| Portrait Upside Down | The device is in portrait mode but upside down, held upright with the home button at the top. |
| Landscape Right | The device is in landscape mode, held upright with the home button on the left side. |
| Landscape Left | The device is in landscape mode, held upright with the home button on the right side. |
| Auto Rotation | The screen orientation is automatically set based on the physical device orientation. |
| Auto Rotation settings | |
| Use Animated Autorotation | When checked, orientation changes are animated. Only applies when Default Orientation is set to Auto Rotation. |
| Allowed orientations for Auto Rotation | |
| Portrait | When checked, portrait orientation is allowed. Only applies when Default Orientation is set to Auto Rotation. |
| Portrait Upside Down | When checked, upside-down portrait orientation is allowed. Only applies when Default Orientation is set to Auto Rotation. |
| Landscape Right | When checked, landscape orientation with the home button on the left side is allowed. Only applies when Default Orientation is set to Auto Rotation. |
| Landscape Left | When checked, landscape orientation with the home button on the right side is allowed. Only applies when Default Orientation is set to Auto Rotation. |
| Status Bar | |
| Status Bar Hidden | Specifies whether the status bar is initially hidden when the application launches. |
| Status Bar Style | Specifies the style of the status bar as the application launches. |
| Default | |
| Black Translucent | |
| Black Opaque | |
| Use 32-bit Display Buffer | Specifies that the display buffer should be created to hold 32-bit color values (16-bit by default). Use this if you see banding, or need alpha in your ImageEffects, since they create render textures in the same format as the display buffer. |
| Show Loading Indicator | Options for the loading indicator. |
| Don't Show | No indicator. |
| White Large | Shows a large white indicator. |
| White | Shows a white indicator of regular size. |
| Gray | Shows a gray indicator of regular size. |
Icon

| Override for iOS | Check this if you want to assign a custom icon for your iPhone/iPad game. Different sized icons should be placed in the squares below. |
| Prerendered icon | If unchecked, iOS applies sheen and bevel effects to the application icon. |
Splash Image

| Mobile Splash Screen (Unity Pro only) | Specifies the texture to be used for the iOS splash screen. The standard splash screen size is 320x480. (This property is shared between iOS and Android.) |
| High Res. iPhone (Unity Pro only) | Specifies the texture to be used for the splash screen on fourth-generation iOS devices. The splash screen size is 640x960. |
| iPad Portrait (Unity Pro only) | Specifies the texture to be used as the iPad portrait splash screen. The standard splash screen size is 768x1024. |
| High Res. iPad Portrait | Specifies the texture to be used as the high-resolution iPad portrait splash screen. The standard splash screen size is 1536x2048. |
| iPad Landscape (Unity Pro only) | Specifies the texture to be used as the iPad landscape splash screen. The standard splash screen size is 1024x768. |
| High Res. iPad Landscape (Unity Pro only) | Specifies the texture to be used as the high-resolution iPad landscape splash screen. The standard splash screen size is 2048x1536. |
Other Settings

Rendering
| Static Batching | Set this to use static batching in your build (enabled by default). Unity Pro only. |
| Dynamic Batching | Set this to use dynamic batching in your build (enabled by default). |
| Identification | |
| Bundle Identifier | The string used in your provisioning certificate from your Apple Developer Network account (this is shared between iOS and Android). |
| Bundle Version | Specifies the build version number of the bundle, which identifies an iteration of the bundle (released or not). The version is specified as one or more period-separated numbers that increase monotonically. |
| Configuration | |
| Target Device | Specifies the application's target device type. |
| iPhone Only | Targets iPhone devices only. |
| iPad Only | Targets iPad devices only. |
| iPhone + iPad | Targets both iPhone and iPad devices. |
| Target Resolution | The resolution you want to use on the deployed device. (This setting has no effect on devices with a maximum resolution of 480x320.) |
| Native (Default Device Resolution) | Uses the device's native resolution. |
| Auto (Best Performance) | Chooses the resolution automatically, favoring performance over graphics quality. |
| Auto (Best Quality) | Chooses the resolution automatically, favoring graphics quality over performance. |
| 320p (iPhone) | Pre-Retina iPhone display. |
| 640p (iPhone Retina Display) | iPhone Retina display. |
| 768p (iPad) | iPad display. |
| Graphics Level | OpenGL version. |
| OpenGL ES 1.x | OpenGL ES 1.x versions. |
| OpenGL ES 2.0 | OpenGL ES 2.0. |
| Accelerometer Frequency | How often the accelerometer is sampled. |
| Disabled | The accelerometer is not sampled. |
| 15Hz | 15 samples per second. |
| 30Hz | 30 samples per second. |
| 60Hz | 60 samples per second. |
| 100Hz | 100 samples per second. |
| Override iPod Music | If checked, the application silences the user's iPod music; otherwise the user's iPod music continues playing in the background. |
| UI Requires Persistent WiFi | Specifies whether the application requires a Wi-Fi connection. iOS maintains an active Wi-Fi connection while the application is running. |
| Exit on Suspend | Specifies whether the application should quit when suspended to the background, on iOS versions that support multitasking. |
| Scripting Define Symbols | Custom compilation flags (see the platform dependent compilation page for details). |
| Optimization | |
| Api Compatibility Level | Specifies the active .NET API profile. |
| .Net 2.0 | .Net 2.0 libraries. Maximum .Net compatibility, biggest file sizes. |
| .Net 2.0 Subset | Subset of full .NET compatibility, smaller file sizes. |
| AOT compilation options | Additional options for the AOT compiler. |
| SDK Version | Specifies the iPhone OS SDK version to use when building in Xcode. |
| Device SDK | SDK for running on actual hardware. |
| Simulator SDK | SDK for running only on the simulator. |
| Target iOS Version | Specifies the oldest iOS version the final application is able to run on; the range is iOS 4.0 to 6.0. |
| Stripping Level (Unity Pro only) | Options to strip out scripting features to reduce the file size of the built player (this setting is shared between the iOS and Android platforms). |
| Disabled | No stripping is performed. |
| Strip Assemblies | Level 1 stripping. |
| Strip ByteCode | Level 2 stripping (includes level 1). |
| Use micro mscorlib | Level 3 stripping (includes levels 1 and 2). |
| Script Call Optimization | Option to disable exception handling in exchange for faster execution at runtime. |
| Slow and Safe | Full exception handling is performed on the device, with a slight performance impact. |
| Fast but no Exceptions | No exception data is provided on the device, but the game runs faster. |
| Optimize Mesh Data | Removes any data from meshes that is not required by the material applied to them (tangents, normals, colors, UV). |
Note: if you build, for example, for iPhone OS 3.2 and then select Simulator 3.2 in Xcode, you will get a lot of errors. Be sure to select the correct Target SDK in the Unity Editor.

Android
Resolution And Presentation

Resolution and presentation settings for Android project builds.
| Resolution | |
| Default Orientation | (This property is shared between iOS and Android.) |
| Portrait | The device is in portrait mode, held upright with the home button at the bottom. |
| Portrait Upside Down | The device is in portrait mode but upside down, held upright with the home button at the top. |
| Landscape Right | The device is in landscape mode, held sideways with the home button on the left side. |
| Landscape Left | The device is in landscape mode, held sideways with the home button on the right side. |
| Use 32-bit Display Buffer | Specifies whether the display buffer is created to hold 32-bit color values (16-bit by default). Use it if you see banding, or need alpha in your ImageEffects, since they create render textures in the same format as the display buffer. Not supported on devices running a pre-Gingerbread OS (they are forced to 16-bit). |
| Use 24-bit Depth Buffer | Specifies that the depth buffer is created to hold (at least) 24-bit depth values. Use it only if you see 'z-fighting' or other artifacts, as it may have a performance impact. |
| Icon | |
|---|---|

The different icons your project will have when built.
| Override for Android | Check this if you want to assign a custom icon for your Android game. Different sized icons should be placed in the squares below. |
| Splash Image | |
|---|---|

The splash image displayed when the project launches.
| Mobile Splash Screen (Unity Pro only) | Specifies the texture to be used for the splash screen. The standard splash screen size is 320x480. (This is shared between Android and iOS.) |
| Splash Scaling | Specifies how the splash image is scaled on the device. |
| Other Settings | |
|---|---|

| Rendering | |
| Static Batching | Set this to use static batching in your build (disabled by default in webplayers). Unity Pro only. |
| Dynamic Batching | Set this to use dynamic batching in your build (enabled by default). |
| Identification | |
| Bundle Identifier | The string used in your provisioning certificate from your Apple Developer Network account (this is shared between iOS and Android). |
| Bundle Version | Specifies the build version number of the bundle, which identifies an iteration of the bundle (released or not). The version is specified as one or more period-separated numbers that increase monotonically. (This is shared between iOS and Android.) |
| Bundle Version Code | An internal version number. This number is used only to determine whether one version is more recent than another, with higher numbers indicating more recent versions. This is not the version number shown to users; that number is set by the versionName attribute. The value must be set as an integer, such as "100". You can define it however you want, as long as each successive version has a higher number. For example, it could be a build number. Or you could translate a version number in "x.y" format into an integer by encoding "x" and "y" separately in the lower and upper 16 bits. Or you could simply increase the number by one each time a new version is released. |
| Minimum API Level | The minimum API version required to support this build. |
| Configuration | |
| Graphics Level | Select either the ES 1.1 ('fixed function') or ES 2.0 ('shader based') OpenGL version. When using an AVD (emulator), only ES 1.x is supported. |
| Install Location | Specifies where the application is installed on the device (for details, see http://developer.android.com/guide/appendix/install-location.html). |
| Automatic | Let the OS decide. The user will be able to move the app back and forth. |
| Prefer External | Install the app to external storage (SD card) if possible. The OS does not guarantee it; if it is not possible, the app will be installed to internal memory. |
| Force Internal | Force the app to be installed to internal memory. The user will be unable to move the app to external storage. |
| Internet Access | When set to Require, the networking permission is enabled even if your scripts are not using it. Automatically enabled for development builds. |
| Write Access | When set to External (SDCard), write access to external storage such as the SD card is enabled. Automatically enabled for development builds. |
| Scripting Define Symbols | Custom compilation flags (see the platform dependent compilation page for details). |
| Optimization | |
| Api Compatibility Level | Specifies the active .NET API profile. |
| .Net 2.0 | .Net 2.0 libraries. Maximum .Net compatibility, biggest file sizes. |
| .Net 2.0 Subset | Subset of full .NET compatibility, smaller file sizes. |
| Stripping Level (Unity Pro only) | Options to strip out scripting features to reduce the file size of the built player (this setting is shared between the iOS and Android platforms). |
| Disabled | No stripping is performed. |
| Strip Assemblies | Level 1 stripping. |
| Strip ByteCode | Level 2 stripping (includes level 1). |
| Use micro mscorlib | Level 3 stripping (includes levels 1 and 2). |
| Enable "logcat" profiler | Enable this if you want to get feedback from your device while testing your projects. Output from adb logcat is then printed from the device to the console (only available in development builds). |
| Optimize Mesh Data | Removes any data from meshes that is not required by the material applied to them (tangents, normals, colors, UV). |
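The "x.y" encoding described for Bundle Version Code above can be sketched as a small helper (a hypothetical utility for computing the value you enter in the settings, not a Unity API):

```javascript
// Packs a "major.minor" version into a single integer version code:
// "major" goes in the upper 16 bits, "minor" in the lower 16 bits,
// so each successive release always compares as a larger integer.
function encodeVersionCode(major, minor) {
    return (major << 16) | (minor & 0xFFFF);
}
```

For example, version 1.2 encodes to 65538, and any 1.x release stays below any 2.x release.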
| Publishing Settings |
|---|

Publishing settings for the Android Market.
| Keystore | |
| Use Existing Keystore / Create New Keystore | Use this to choose whether to create a new keystore or use an existing one. |
| Browse Keystore | Lets you select an existing keystore. |
| Keystore password | Password for the keystore. |
| Confirm password | Password confirmation; only enabled when the Create New Keystore option is selected. |
| Key | |
| Alias | Key alias. |
| Password | Password for the key alias. |
| Split Application Binary | Flag to split the application into expansion files. Only useful when the final build exceeds 50 MB on the Google Play Store. |
Note that for security reasons, Unity stores neither the keystore password nor the key password. Also note that signing must be done from Unity's Player Settings; it will not work if you sign with jarsigner.
Flash
Resolution And Presentation

| Resolution | |
| Default Screen Width | Screen width the player will be generated with. |
| Default Screen Height | Screen height the player will be generated with. |
Other Settings

| Optimization | |
| Stripping | Option to strip bytecode at build time. |
| Strip Physics Code | Removes physics engine code from the build when it is not needed. |
| Optimize Mesh Data | Removes any data from meshes that is not required by the material applied to them (tangents, normals, colors, UV). |
Google Native Client
Resolution and Presentation

| Resolution | |
| Default Screen Width | Screen width the player will be generated with. |
| Default Screen Height | Screen height the player will be generated with. |
Icon

The different icons your project will have when built.
| Override for Web | Check this if you want to assign a custom icon for your Native Client game. Different sized icons should be placed in the squares below. |
Other Settings

| Rendering | |
| Static Batching | Set this to use static batching in your build (disabled by default in webplayers). Unity Pro only. |
| Dynamic Batching | Set this to use dynamic batching in your build (enabled by default). |
| Configuration | |
| Scripting Define Symbols | Custom compilation flags (see the platform dependent compilation page for details). |
| Optimization | |
| API Compatibility Level | |
| .Net 2.0 | .Net 2.0 libraries. Maximum .Net compatibility, biggest file sizes. |
| .Net 2.0 Subset | Subset of full .NET compatibility, smaller file sizes. |
| Strip Physics Code | Removes physics engine code from the build when it is not needed. |
| Optimize Mesh Data | Removes any data from meshes that is not required by the material applied to them (tangents, normals, colors, UV). |
Details

Desktop
The Player Settings window is where many technical preference defaults are set. See also the Quality Settings, where you can set up different graphics quality levels.
Publishing a Web Player
Default Web Screen Width and Default Web Screen Height determine the size used in the html file. You can change the size in the html file later.
Default Screen Width and Default Screen Height are used by the Web Player when entering fullscreen mode through the context menu while the player is running.
Customizing the Resolution Dialog

The Resolution dialog displayed to end users
You have the option of adding a custom banner image to the standalone player's Screen Resolution dialog. The maximum image size is 432 x 163 pixels. The image will not be scaled up to fit the screen selector; instead, it will be centered and cropped.
Publishing to the Mac App Store
Use Player Log enables writing a log file with debugging information. This is useful for investigating what happened if there are problems with your game. When publishing games for Apple's Mac App Store, it is recommended that you turn this off, because Apple may reject your submission otherwise. See this manual page for further information about log files.
Use Mac App Store Validation enables receipt validation for the Mac App Store. If this is enabled, your game will only run when it contains a valid receipt from the Mac App Store. Use this when submitting games to Apple for publishing on the App Store. This prevents the game from running on any computer other than the one it was purchased on. Note that this feature does not implement any strong copy protection. In particular, any potential crack against one Unity game would work against any other Unity content. For this reason, it is recommended that you implement your own receipt validation code on top of this, using Unity's plugin feature. However, because Apple requires the plugin validation to initially happen before showing the screen setup dialog, you should still enable this check, or Apple may reject your submission.

iOS
Bundle Identifier
The Bundle Identifier string must match the provisioning profile of the game you are building. The basic structure of the identifier is com.CompanyName.GameName. This structure may vary depending on the country in which you live, so always default to the string Apple provides for your developer account. Your GameName is set up in your provisioning certificates, which are managed from the Apple iPhone Developer Center website. Please refer to the Apple iPhone Developer Center website for more information on how this is done.
Stripping Level (Unity Pro only)
Most games don't use all of the available DLLs. With this option, you can strip out unused parts to reduce the size of the built player on iOS devices. If your game uses classes that would normally be stripped out by the selected option, you'll be presented with a debug message when you build.
Script Call Optimization
A good development practice on iOS is to never rely on exception handling (either internally or through the use of try/catch blocks). When using the default Slow and Safe option, any exceptions that occur on the device will be caught and a stack trace will be provided. When using the Fast but no Exceptions option, any exceptions that occur will crash the game and no stack trace will be provided. However, the game will run faster because the processor does not have to handle exceptions. When releasing your game to the world, it's best to publish with the Fast but no Exceptions option.

Android
Bundle Identifier
The Bundle Identifier string is the unique name of your application when published to the Android Market and installed on the device. The basic structure of the identifier is com.CompanyName.GameName, and it can be chosen arbitrarily. In Unity this field is shared with the iOS Player Settings for convenience.
Stripping Level (Unity Pro only)
Most games don't use all of the available DLLs. With this option, you can strip out unused parts to reduce the size of the built player on Android devices.
Android-API
Unity Android provides a number of scripting APIs unified with iOS APIs to access handheld device functionality. For cross-platform projects, UNITY_ANDROID is defined for conditionally compiling Android-specific C# code. The following scripting classes contain Android-related changes (some of the API is shared between Android and iOS):
| Input | Access to multi-touch screen, accelerometer and device orientation. |
| iPhoneSettings | Some of the Android settings, such as screen orientation, dimming and information about device hardware. |
| iPhoneKeyboard | Support for native on-screen keyboard. |
| iPhoneUtils | Useful functions for movie playback, anti-piracy protection and vibration. |
Further Reading
Page last updated: 2010-09-09
Android-Input

Desktop
Note: Keyboard, joystick and gamepad input work on the desktop versions of Unity (including webplayer and Flash) but not on mobiles.
Unity supports keyboard, joystick and gamepad input.
Virtual axes and buttons can be created in the Input Manager, and end users can configure Keyboard input in a nice screen configuration dialog.

You can set up joysticks, gamepads, keyboard, and mouse, then access them all through one simple scripting interface.
From scripts, all virtual axes are accessed by their name.
Every project has the following default input axes when it's created:
- Horizontal and Vertical are mapped to w, a, s, d and the arrow keys.
- Fire1, Fire2, Fire3 are mapped to Control, Option (Alt), and Command, respectively.
- Mouse X and Mouse Y are mapped to the delta of mouse movement.
- Window Shake X and Window Shake Y are mapped to the movement of the window.
Adding new Input Axes
If you want to add new virtual axes, go to the Edit->Project Settings->Input menu. Here you can also change the settings of each axis.

You map each axis to two buttons on a joystick, mouse, or keyboard keys.
| Name | The name of the string used to check this axis from a script. |
| Descriptive Name | Positive value name displayed in the input tab of the dialog for standalone builds. |
| Descriptive Negative Name | Negative value name displayed in the Input tab of the dialog for standalone builds. |
| Negative Button | The button used to push the axis in the negative direction. |
| Positive Button | The button used to push the axis in the positive direction. |
| Alt Negative Button | Alternative button used to push the axis in the negative direction. |
| Alt Positive Button | Alternative button used to push the axis in the positive direction. |
| Gravity | Speed in units per second that the axis falls toward neutral when no buttons are pressed. |
| Dead | Size of the analog dead zone. All analog device values within this range map to neutral. |
| Sensitivity | Speed in units per second that the axis will move toward the target value. This is for digital devices only. |
| Snap | If enabled, the axis value will reset to zero when pressing a button of the opposite direction. |
| Invert | If enabled, the Negative Buttons provide a positive value, and vice-versa. |
| Type | The type of inputs that will control this axis. |
| Axis | The axis of a connected device that will control this axis. |
| Joy Num | The connected Joystick that will control this axis. |
Use these settings to fine tune the look and feel of input. They are all documented with tooltips in the Editor as well.
Using Input Axes from Scripts
You can query the current state from a script like this:
value = Input.GetAxis ("Horizontal");
An axis has a value between -1 and 1. The neutral position is 0. This is the case for joystick input and keyboard input.
However, Mouse Delta and Window Shake Delta are how much the mouse or window moved during the last frame. This means it can be larger than 1 or smaller than -1 when the user moves the mouse quickly.
It is possible to create multiple axes with the same name. When getting the input axis, the axis with the largest absolute value will be returned. This makes it possible to assign more than one input device to one axis name. For example, create one axis for keyboard input and one axis for joystick input with the same name. If the user is using the joystick, input will come from the joystick, otherwise input will come from the keyboard. This way you don't have to consider where the input comes from when writing scripts.
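As a minimal sketch of the pattern above (assuming the script is attached to a movable GameObject), the default Horizontal and Vertical axes can drive movement regardless of whether the input comes from the keyboard or a joystick:

```
var speed : float = 5.0;

function Update () {
    // Both return values in the -1..1 range, from whichever device is in use
    var h = Input.GetAxis ("Horizontal");
    var v = Input.GetAxis ("Vertical");

    // Scale by Time.deltaTime for frame-rate independent movement
    transform.Translate (Vector3 (h, 0, v) * speed * Time.deltaTime);
}
```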
Button Names
To map a key to an axis, you have to enter the key's name in the Positive Button or Negative Button property in the Inspector.
The names of keys follow this convention:
- Normal keys: "a", "b", "c" ...
- Number keys: "1", "2", "3", ...
- Arrow keys: "up", "down", "left", "right"
- Keypad keys: "[1]", "[2]", "[3]", "[+]", "[equals]"
- Modifier keys: "right shift", "left shift", "right ctrl", "left ctrl", "right alt", "left alt", "right cmd", "left cmd"
- Mouse Buttons: "mouse 0", "mouse 1", "mouse 2", ...
- Joystick Buttons (from any joystick): "joystick button 0", "joystick button 1", "joystick button 2", ...
- Joystick Buttons (from a specific joystick): "joystick 1 button 0", "joystick 1 button 1", "joystick 2 button 0", ...
- Special keys: "backspace", "tab", "return", "escape", "space", "delete", "enter", "insert", "home", "end", "page up", "page down"
- Function keys: "f1", "f2", "f3", ...
The names used to identify the keys are the same in the scripting interface and the Inspector.
value = Input.GetKey ("a");
Mobile Input
On iOS and Android, the Input class offers access to touchscreen, accelerometer and geographical/location input.
Access to the keyboard on mobile devices is provided via the iOS keyboard.
Multi-Touch Screen
The iPhone and iPod Touch devices are capable of tracking up to five fingers touching the screen simultaneously. You can retrieve the status of each finger touching the screen during the last frame by accessing the Input.touches property array.
Android devices don't have a unified limit on how many fingers they track. Instead, it varies from device to device and can be anything from two-touch on older devices to five fingers on some newer devices.
Each finger touch is represented by an Input.Touch data structure:
| fingerId | The unique index for a touch. |
| position | The screen position of the touch. |
| deltaPosition | The screen position change since the last frame. |
| deltaTime | Amount of time that has passed since the last state change. |
| tapCount | The iPhone/iPad screen is able to distinguish quick finger taps by the user. This counter will let you know how many times the user has tapped the screen without moving a finger to the sides. Android devices do not count number of taps, this field is always 1. |
| phase | Describes so called "phase" or the state of the touch. It can help you determine if the touch just began, if user moved the finger or if he just lifted the finger. |
Phase can be one of the following:
| Began | A finger just touched the screen. |
| Moved | A finger moved on the screen. |
| Stationary | A finger is touching the screen but hasn't moved since the last frame. |
| Ended | A finger was lifted from the screen. This is the final phase of a touch. |
| Canceled | The system cancelled tracking for the touch, as when (for example) the user puts the device to her face or more than five touches happened simultaneously. This is the final phase of a touch. |
Following is an example script which will shoot a ray whenever the user taps on the screen:
var particle : GameObject;

function Update () {
    for (var touch : Touch in Input.touches) {
        if (touch.phase == TouchPhase.Began) {
            // Construct a ray from the current touch coordinates
            var ray = Camera.main.ScreenPointToRay (touch.position);
            if (Physics.Raycast (ray)) {
                // Create a particle if hit
                Instantiate (particle, transform.position, transform.rotation);
            }
        }
    }
}
Mouse Simulation
On top of native touch support Unity iOS/Android provides a mouse simulation. You can use mouse functionality from the standard Input class.
Device Orientation
Unity iOS/Android allows you to get discrete description of the device physical orientation in three-dimensional space. Detecting a change in orientation can be useful if you want to create game behaviors depending on how the user is holding the device.
You can retrieve device orientation by accessing the Input.deviceOrientation property. Orientation can be one of the following:
| Unknown | The orientation of the device cannot be determined. For example, when the device is rotated diagonally. |
| Portrait | The device is in portrait mode, with the device held upright and the home button at the bottom. |
| PortraitUpsideDown | The device is in portrait mode but upside down, with the device held upright and the home button at the top. |
| LandscapeLeft | The device is in landscape mode, with the device held upright and the home button on the right side. |
| LandscapeRight | The device is in landscape mode, with the device held upright and the home button on the left side. |
| FaceUp | The device is held parallel to the ground with the screen facing upwards. |
| FaceDown | The device is held parallel to the ground with the screen facing downwards. |
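A small polling sketch using the property and values listed above:

```
private var lastOrientation = DeviceOrientation.Unknown;

function Update () {
    var current = Input.deviceOrientation;
    if (current != lastOrientation) {
        lastOrientation = current;
        // React to the new orientation, e.g. when the screen faces the ground
        if (current == DeviceOrientation.FaceDown)
            Debug.Log ("Device is face down");
    }
}
```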
Accelerometer
As the mobile device moves, a built-in accelerometer reports linear acceleration changes along the three primary axes in three-dimensional space. Acceleration along each axis is reported directly by the hardware as G-force values. A value of 1.0 represents a load of about +1g along a given axis while a value of -1.0 represents -1g. If you hold the device upright (with the home button at the bottom) in front of you, the X axis is positive along the right, the Y axis is positive directly up, and the Z axis is positive pointing toward you.
You can retrieve the accelerometer value by accessing the Input.acceleration property.
The following is an example script which will move an object using the accelerometer:
var speed = 10.0;

function Update () {
    var dir : Vector3 = Vector3.zero;

    // we assume that the device is held parallel to the ground
    // and the Home button is in the right hand

    // remap the device acceleration axis to game coordinates:
    // 1) XY plane of the device is mapped onto XZ plane
    // 2) rotated 90 degrees around Y axis
    dir.x = -Input.acceleration.y;
    dir.z = Input.acceleration.x;

    // clamp acceleration vector to the unit sphere
    if (dir.sqrMagnitude > 1)
        dir.Normalize();

    // Make it move 10 meters per second instead of 10 meters per frame...
    dir *= Time.deltaTime;

    // Move object
    transform.Translate (dir * speed);
}
Low-Pass Filter
Accelerometer readings can be jerky and noisy. Applying low-pass filtering on the signal allows you to smooth it and get rid of high frequency noise.
The following script shows you how to apply low-pass filtering to accelerometer readings:
var AccelerometerUpdateInterval : float = 1.0 / 60.0;
var LowPassKernelWidthInSeconds : float = 1.0;
private var LowPassFilterFactor : float = AccelerometerUpdateInterval / LowPassKernelWidthInSeconds; // tweakable
private var lowPassValue : Vector3 = Vector3.zero;
function Start () {
    lowPassValue = Input.acceleration;
}

function LowPassFilterAccelerometer() : Vector3 {
    lowPassValue = Mathf.Lerp(lowPassValue, Input.acceleration, LowPassFilterFactor);
    return lowPassValue;
}
The greater the value of LowPassKernelWidthInSeconds, the slower the filtered value will converge towards the current input sample (and vice versa). You can call the LowPassFilterAccelerometer() function wherever you would otherwise read Input.acceleration directly.
I'd like as much precision as possible when reading the accelerometer. What should I do?
Reading the Input.acceleration variable does not equal sampling the hardware. Put simply, Unity samples the hardware at a frequency of 60Hz and stores the result into the variable. In reality, things are a little bit more complicated: accelerometer sampling doesn't occur at consistent time intervals, especially under significant CPU load. As a result, the system might report 2 samples during one frame, then 1 sample during the next frame.
You can access all measurements executed by accelerometer during the frame. The following code will illustrate a simple average of all the accelerometer events that were collected within the last frame:
function AverageAcceleration () : Vector3 {
    var period : float = 0.0;
    var acc : Vector3 = Vector3.zero;

    // Weight each event by its duration, then normalize by the total period
    for (var evnt : iPhoneAccelerationEvent in iPhoneInput.accelerationEvents) {
        acc += evnt.acceleration * evnt.deltaTime;
        period += evnt.deltaTime;
    }
    if (period > 0)
        acc *= 1.0 / period;
    return acc;
}
Further Reading
The Unity mobile input API is originally based on Apple's API. It may help to learn more about the native API to better understand Unity's Input API. You can find the Apple input API documentation here:
- Programming Guide: Event Handling (Apple iPhone SDK documentation)
- UITouch Class Reference (Apple iOS SDK documentation)
Note: The above links reference your locally installed iPhone SDK Reference Documentation and will contain native ObjectiveC code. It is not necessary to understand these documents for using Unity on mobile devices, but may be helpful to some!

iOS
Device geographical location
Device geographical location can be obtained via the iPhoneInput.lastLocation property. Before calling this property you should start location service updates using iPhoneSettings.StartLocationServiceUpdates() and check the service status via iPhoneSettings.locationServiceStatus. See the scripting reference for details.
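That sequence can be sketched as a coroutine (a sketch only; check the scripting reference for the exact status enumeration values in your Unity version):

```
function Start () {
    iPhoneSettings.StartLocationServiceUpdates ();

    // Wait until the service either starts running or fails
    while (iPhoneSettings.locationServiceStatus == LocationServiceStatus.Initializing)
        yield WaitForSeconds (0.5);

    if (iPhoneSettings.locationServiceStatus == LocationServiceStatus.Running) {
        var loc = iPhoneInput.lastLocation;
        Debug.Log ("Location: " + loc.latitude + ", " + loc.longitude);
    } else {
        Debug.Log ("Location service unavailable or denied by the user");
    }
}
```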
Android-Keyboard
In most cases, Unity will handle keyboard input automatically for GUI elements but it is also easy to show the keyboard on demand from a script.

iOS
Using the Keyboard
GUI Elements
The keyboard will appear automatically when a user taps on editable GUI elements. Currently, GUI.TextField, GUI.TextArea and GUI.PasswordField will display the keyboard; see the GUI class documentation for further details.
Manual Keyboard Handling
Use the iPhoneKeyboard.Open function to open the keyboard. Please see the iPhoneKeyboard scripting reference for the parameters that this function takes.
Keyboard Type Summary
The Keyboard supports the following types:
| iPhoneKeyboardType.Default | Letters. Can be switched to keyboard with numbers and punctuation. |
| iPhoneKeyboardType.ASCIICapable | Letters. Can be switched to keyboard with numbers and punctuation. |
| iPhoneKeyboardType.NumbersAndPunctuation | Numbers and punctuation. Can be switched to keyboard with letters. |
| iPhoneKeyboardType.URL | Letters with slash and .com buttons. Can be switched to keyboard with numbers and punctuation. |
| iPhoneKeyboardType.NumberPad | Only numbers from 0 to 9. |
| iPhoneKeyboardType.PhonePad | Keyboard used to enter phone numbers. |
| iPhoneKeyboardType.NamePhonePad | Letters. Can be switched to phone keyboard. |
| iPhoneKeyboardType.EmailAddress | Letters with @ sign. Can be switched to keyboard with numbers and punctuation. |
Text Preview
By default, an edit box will be created and placed on top of the keyboard after it appears. This works as a preview of the text the user is typing, so the text is always visible to the user. However, you can disable text preview by setting iPhoneKeyboard.hideInput to true. Note that this works only for certain keyboard types and input modes. For example, it will not work for phone keypads and multi-line text input. In such cases, the edit box will always appear. iPhoneKeyboard.hideInput is a global variable and will affect all keyboards.
Keyboard Orientation
By default, the keyboard automatically follows the device orientation. To disable or enable rotation to a certain orientation, use the following properties available in iPhoneKeyboard:
| autorotateToPortrait | Enable or disable autorotation to portrait orientation (button at the bottom). |
| autorotateToPortraitUpsideDown | Enable or disable autorotation to portrait orientation (button at top). |
| autorotateToLandscapeLeft | Enable or disable autorotation to landscape left orientation (button on the right). |
| autorotateToLandscapeRight | Enable or disable autorotation to landscape right orientation (button on the left). |
Visibility and Keyboard Size
There are three keyboard properties in iPhoneKeyboard that determine keyboard visibility status and size on the screen.
| visible | Returns true if the keyboard is fully visible on the screen and can be used to enter characters. |
| area | Returns the position and dimensions of the keyboard. |
| active | Returns true if the keyboard is activated. This is not a static property; you must have a keyboard instance to use it. |
Note that iPhoneKeyboard.area will return a rect with position and size set to 0 until the keyboard is fully visible on the screen. You should not query this value immediately after iPhoneKeyboard.Open. The sequence of keyboard events is as follows:
- iPhoneKeyboard.Open is called. iPhoneKeyboard.active returns true. iPhoneKeyboard.visible returns false. iPhoneKeyboard.area returns (0, 0, 0, 0).
- Keyboard slides out into the screen. All properties remain the same.
- Keyboard stops sliding. iPhoneKeyboard.active returns true. iPhoneKeyboard.visible returns true. iPhoneKeyboard.area returns real position and size of the keyboard.
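Putting the sequence together, a typical pattern is to open the keyboard and then poll the instance each frame until input is finished (a sketch; see the iPhoneKeyboard scripting reference for the exact members):

```
private var keyboard : iPhoneKeyboard;

function OnGUI () {
    if (keyboard == null && GUI.Button (Rect (10, 10, 200, 40), "Enter name"))
        keyboard = iPhoneKeyboard.Open ("", iPhoneKeyboardType.Default);

    // keyboard.done becomes true once the user dismisses the keyboard
    if (keyboard != null && keyboard.done) {
        Debug.Log ("User typed: " + keyboard.text);
        keyboard = null;
    }
}
```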
Secure Text Input
It is possible to configure the keyboard to hide symbols when typing. This is useful when users are required to enter sensitive information (such as passwords). To manually open keyboard with secure text input enabled, use the following code:
iPhoneKeyboard.Open("", iPhoneKeyboardType.Default, false, false, true);

Hiding text while typing
Alert keyboard
To display the keyboard with a black semi-transparent background instead of the classic opaque, call iPhoneKeyboard.Open as follows:
iPhoneKeyboard.Open("", iPhoneKeyboardType.Default, false, false, true, true);

Classic keyboard

Alert keyboard

Android
Unity Android reuses the iOS API to display the system keyboard. Even though Unity Android supports most of the functionality of its iPhone counterpart, there are two aspects which are not supported:
- iPhoneKeyboard.hideInput
- iPhoneKeyboard.area
Please also note that the layout of an iPhoneKeyboardType can differ somewhat between devices.
Android-Advanced

iOS
Advanced iOS scripting
Determining Device Generation
Different device generations support different functionality and have widely varying performance. You should query the device's generation and decide which functionality should be disabled to compensate for slower devices.
You can find the device generation from the iPhone.generation property. The reported generation can be one of the following:
- iPhone
- iPhone3G
- iPhone3GS
- iPhone4
- iPodTouch1Gen
- iPodTouch2Gen
- iPodTouch3Gen
- iPodTouch4Gen
- iPad1Gen
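For example, a sketch that lowers the quality level on the earliest hardware (the generation cut-off shown here is an assumption you would tune for your own game):

```
function Start () {
    var gen = iPhone.generation;

    // Treat first-generation devices as the low end
    if (gen == iPhoneGeneration.iPhone || gen == iPhoneGeneration.iPodTouch1Gen) {
        QualitySettings.currentLevel = QualityLevel.Fastest;
    }
}
```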
You can find more information about different device generations, performance and supported functionality in our iPhone Hardware Guide.
Device Properties
There are a number of device-specific properties that you can access:
| SystemInfo.deviceUniqueIdentifier | Unique device identifier. |
| SystemInfo.deviceName | User specified name for device. |
| SystemInfo.deviceModel | Is it iPhone or iPod Touch? |
| SystemInfo.operatingSystem | Operating system name and version. |
Anti-Piracy Check
Pirates will often hack an application from the AppStore (by removing Apple DRM protection) and then redistribute it for free. Unity iOS comes with an anti-piracy check which allows you to determine if your application was altered after it was submitted to the AppStore.
You can check if your application is genuine (not hacked) with the Application.genuine property. If this property returns false, you might notify the user that they are using a hacked application, or perhaps disable access to some functions of your application.
Note: accessing the Application.genuine property is a fairly expensive operation and so you shouldn't do it during frame updates or other time-critical code.
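Because the check is expensive, one pattern (the script name and the response to piracy here are illustrative) is to query the property once at startup and cache the result:

```csharp
using UnityEngine;

public class PiracyCheck : MonoBehaviour {
    // Cache the result once at startup; Application.genuine is expensive,
    // so never query it per-frame.
    public static bool isGenuine = true;

    void Start () {
        isGenuine = Application.genuine;
        if (!isGenuine) {
            Debug.Log("This copy appears to have been modified.");
            // eg, disable online features here
        }
    }
}
```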
Vibration Support
You can trigger a vibration by calling Handheld.Vibrate. Note that iPod Touch devices lack vibration hardware and will just ignore this call.

Android
Advanced Android scripting
Determining Device Generation
Different Android devices support different functionality and have widely varying performance. You should target specific devices or device families and decide which functionality should be disabled to compensate for slower devices. There are a number of device-specific properties that you can access to determine which device is being used.
Note: the Android Market does some additional compatibility filtering, so you need not be concerned about an ARMv7-only app optimised for OpenGL ES 2.0 being offered to old, slow devices.
Device Properties
| SystemInfo.deviceUniqueIdentifier | Unique device identifier. |
| SystemInfo.deviceName | User specified name for device. |
| SystemInfo.deviceModel | The model of the device. |
| SystemInfo.operatingSystem | Operating system name and version. |
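A minimal sketch of using these properties (the script name is illustrative) is to log them at startup while deciding which features to disable on slower devices:

```csharp
using UnityEngine;

public class DeviceInfoLogger : MonoBehaviour {
    void Start () {
        // Log the device-specific properties listed above; useful when
        // targeting device families and tuning for slower hardware.
        Debug.Log("Model: " + SystemInfo.deviceModel);
        Debug.Log("OS: " + SystemInfo.operatingSystem);
        Debug.Log("Graphics: " + SystemInfo.graphicsDeviceName);
    }
}
```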
Anti-Piracy Check
Pirates will often hack an application (by removing any copy protection) and then redistribute it for free. Unity Android comes with an anti-piracy check which allows you to determine if your application was altered after it was submitted for distribution.
You can check if your application is genuine (not hacked) with the Application.genuine property. If this property returns false then you might notify the user that they are using a hacked application, or perhaps disable access to some functions of your application.
Note: Application.genuineCheckAvailable should be used along with Application.genuine to verify that application integrity can actually be confirmed. Accessing the Application.genuine property is a fairly expensive operation and so you shouldn't do it during frame updates or other time-critical code.
Vibration Support
You can trigger a vibration by calling Handheld.Vibrate. However, devices lacking vibration hardware will just ignore this call.
Android-DotNet

iOS
Unity iOS supports two .NET API compatibility levels: .NET 2.0 and a subset of .NET 2.0. You can select the appropriate level in the Player Settings.
.NET API 2.0
Unity supports the .NET 2.0 API profile. This is close to the full .NET 2.0 API and offers the best compatibility with pre-existing .NET code. However, the application's build size and startup time will be relatively poor.
Note: Unity iOS does not support namespaces in scripts. If you have a third party library supplied as source code then the best approach is to compile it to a DLL outside Unity and then drop the DLL file into your project's Assets folder.
.NET 2.0 Subset
Unity also supports the .NET 2.0 Subset API profile. This is close to the Mono "monotouch" profile, so many limitations of the "monotouch" profile also apply to Unity's .NET 2.0 Subset profile. More information on the limitations of the "monotouch" profile can be found here. The advantage of using this profile is reduced build size (and startup time) but this comes at the expense of compatibility with existing .NET code.

Android
Unity Android supports two .NET API compatibility levels: .NET 2.0 and a subset of .NET 2.0. You can select the appropriate level in the Player Settings.
.NET API 2.0
Unity supports the .NET 2.0 API profile. This is close to the full .NET 2.0 API and offers the best compatibility with pre-existing .NET code. However, the application's build size and startup time will be relatively poor.
Note: Unity Android does not support namespaces in scripts. If you have a third party library supplied as source code then the best approach is to compile it to a DLL outside Unity and then drop the DLL file into your project's Assets folder.
.NET 2.0 Subset
Unity also supports the .NET 2.0 Subset API profile. This is close to the Mono "monotouch" profile, so many limitations of the "monotouch" profile also apply to Unity's .NET 2.0 Subset profile. More information on the limitations of the "monotouch" profile can be found here. The advantage of using this profile is reduced build size (and startup time) but this comes at the expense of compatibility with existing .NET code.
Android-Plugins
This page describes Native Code Plugins for Android.
Building a Plugin for Android
To build a plugin for Android, you should first obtain the Android NDK and familiarize yourself with the steps involved in building a shared library.
If you are using C++ (.cpp) to implement the plugin you must ensure the functions are declared with C linkage to avoid name mangling issues.
extern "C" {
float FooPluginFunction ();
}
Using Your Plugin from C#
Once built, the shared library should be copied to the folder. Unity will then find it by name when you define a function like the following in the C# script:-
[DllImport ("PluginName")]
private static extern float FooPluginFunction ();
Please note that PluginName should not include the prefix ('lib') nor the extension ('.so') of the filename. It is advisable to wrap all native code methods with an additional C# code layer. This code should check Application.platform and call native methods only when the app is running on the actual device; dummy values can be returned from the C# code when running in the Editor. You can also use platform defines to control platform dependent code compilation.
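A sketch of such a wrapper layer, reusing the hypothetical "PluginName" library and FooPluginFunction from above, might look like this:

```csharp
using UnityEngine;
using System.Runtime.InteropServices;

public class PluginWrapper {
    // "PluginName" and FooPluginFunction are the hypothetical names used above.
    [DllImport ("PluginName")]
    private static extern float FooPluginFunction ();

    // Only call into native code when running on an actual Android device;
    // return a dummy value when running in the Editor.
    public static float SafeFooPluginFunction () {
        if (Application.platform == RuntimePlatform.Android)
            return FooPluginFunction ();
        return 0.0f; // dummy value for the Editor
    }
}
```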
Deployment
For cross platform deployment, your project should include plugins for each supported platform (ie, libPlugin.so for Android, Plugin.bundle for Mac and Plugin.dll for Windows). Unity automatically picks the right plugin for the target platform and includes it with the player.
Using Java Plugins
The Android plugin mechanism also allows Java to be used to enable interaction with the Android OS.
Building a Java Plugin for Android
There are several ways to create a Java plugin but the result in each case is that you end up with a .jar file containing the .class files for your plugin. One approach is to download the JDK, then compile your .java files from the command line with javac. This will create .class files which you can then package into a .jar with the jar command line tool. Another option is to use the Eclipse IDE together with the ADT.
Using Your Java Plugin from Native Code
Once you have built your Java plugin (.jar) you should copy it to the folder in the Unity project. Unity will package your .class files together with the rest of the Java code and then access the code using the Java Native Interface (JNI). JNI is used both when calling native code from Java and when interacting with Java (or the JavaVM) from native code.
To find your Java code from the native side you need access to the Java VM. Fortunately, that access can be obtained easily by adding a function like this to your C/C++ code:
jint JNI_OnLoad(JavaVM* vm, void* reserved) {
JNIEnv* jni_env = 0;
vm->AttachCurrentThread(&jni_env, 0);
}
This is all that is needed to start using Java from C/C++. It is beyond the scope of this document to explain JNI completely. However, using it usually involves finding the class definition, resolving the constructor (<init>) method and creating a new object instance, as shown in this example:-
jobject createJavaObject(JNIEnv* jni_env) {
jclass cls_JavaClass = jni_env->FindClass("com/your/java/Class"); // find class definition
jmethodID mid_JavaClass = jni_env->GetMethodID (cls_JavaClass, "<init>", "()V"); // find constructor method
jobject obj_JavaClass = jni_env->NewObject(cls_JavaClass, mid_JavaClass); // create object instance
return jni_env->NewGlobalRef(obj_JavaClass); // return object with a global reference
}
Using Your Java Plugin with helper classes
AndroidJNIHelper and AndroidJNI can be used to ease some of the pain with raw JNI.
AndroidJavaObject and AndroidJavaClass automate a lot of tasks and also use caching to make calls to Java faster. The combination of AndroidJavaObject and AndroidJavaClass builds on top of AndroidJNI and AndroidJNIHelper, but also has a lot of logic of its own (to handle the automation). These classes also come in a 'static' version to access static members of Java classes.
You can choose whichever approach you prefer, be it raw JNI through AndroidJNI class methods, or AndroidJNIHelper together with AndroidJNI and eventually AndroidJavaObject/AndroidJavaClass for maximum automation and convenience.
UnityEngine.AndroidJNI is a wrapper for the JNI calls available in C (as described above). All methods in this class are static and have a 1:1 mapping to the Java Native Interface. UnityEngine.AndroidJNIHelper provides helper functionality used by the next level, but is exposed as public methods because they may be useful for some special cases.
Instances of UnityEngine.AndroidJavaObject and UnityEngine.AndroidJavaClass have a 1:1 mapping to an instance of java.lang.Object and java.lang.Class (or subclasses thereof) on the Java side, respectively. They essentially provide 3 types of interaction with the Java side:
- Call a method
- Get the value of a field
- Set the value of a field
The Call is separated into two categories: Call to a 'void' method, and Call to a method with non-void return type. A generic type is used to represent the return type of those methods which return a non-void type. The Get and Set always take a generic type representing the field type.
Example 1
//The comments describe what you would need to do if you were using raw JNI
AndroidJavaObject jo = new AndroidJavaObject("java.lang.String", "some_string");
// jni.FindClass("java.lang.String");
// jni.GetMethodID(classID, "<init>", "(Ljava/lang/String;)V");
// jni.NewStringUTF("some_string");
// jni.NewObject(classID, methodID, javaString);
int hash = jo.Call<int>("hashCode");
// jni.GetMethodID(classID, "hashCode", "()I");
// jni.CallIntMethod(objectID, methodID);
Here, we're creating an instance of java.lang.String, initialized with a string of our choice and retrieving the hash value for that string.
The AndroidJavaObject constructor takes at least one parameter, the name of class for which we want to construct an instance. Any parameters after the class name are for the constructor call on the object, in this case the string "some_string". The subsequent Call to hashCode() returns an 'int' which is why we use that as the generic type parameter to the Call method.
Note: You cannot instantiate a nested Java class using dotted notation. Inner classes must use the $ separator, which works in both dotted and slashed formats. So android.view.ViewGroup$LayoutParams or android/view/ViewGroup$LayoutParams can be used, where the LayoutParams class is nested in the ViewGroup class.
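For illustration (the two int constructor arguments are the width and height that android.view.ViewGroup$LayoutParams accepts; -1 here stands for MATCH_PARENT), instantiating a nested class from C# looks like:

```csharp
// The $ separator is required for the nested class; the outer part
// can use either dots or slashes.
AndroidJavaObject lp =
    new AndroidJavaObject("android.view.ViewGroup$LayoutParams", -1, -1);
```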
Example 2
One of the plugin samples above shows how to get the cache directory for the current application. This is how you would do the same thing from C# without any plugins:-
AndroidJavaClass jc = new AndroidJavaClass("com.unity3d.player.UnityPlayer");
// jni.FindClass("com.unity3d.player.UnityPlayer");
AndroidJavaObject jo = jc.GetStatic<AndroidJavaObject>("currentActivity");
// jni.GetStaticFieldID(classID, "currentActivity", "Ljava/lang/Object;");
// jni.GetStaticObjectField(classID, fieldID);
// jni.FindClass("java.lang.Object");
Debug.Log(jo.Call<AndroidJavaObject>("getCacheDir").Call<string>("getCanonicalPath"));
// jni.GetMethodID(classID, "getCacheDir", "()Ljava/io/File;"); // or any baseclass thereof!
// jni.CallObjectMethod(objectID, methodID);
// jni.FindClass("java.io.File");
// jni.GetMethodID(classID, "getCanonicalPath", "()Ljava/lang/String;");
// jni.CallObjectMethod(objectID, methodID);
// jni.GetStringUTFChars(javaString);
In this case, we start with AndroidJavaClass instead of AndroidJavaObject because we want to access a static member of com.unity3d.player.UnityPlayer rather than create a new object (an instance is created automatically by the Android UnityPlayer). Then we access the static field "currentActivity" but this time we use AndroidJavaObject as the generic parameter. This is because the actual field type (android.app.Activity) is a subclass of java.lang.Object, and any non-primitive type must be accessed as AndroidJavaObject. The exceptions to this rule are strings, which can be accessed directly even though they don't represent a primitive type in Java.
After that it is just a matter of traversing the Activity through getCacheDir() to get the File object representing the cache directory, and then calling getCanonicalPath() to get a string representation.
Of course, nowadays you don't need to do that to get the cache directory since Unity provides access to the application's cache and file directory with Application.temporaryCachePath and Application.persistentDataPath.
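For comparison, the plugin-free approach mentioned above is a one-liner per directory:

```csharp
// No JNI needed: Unity exposes the cache and data directories directly.
Debug.Log(Application.temporaryCachePath);
Debug.Log(Application.persistentDataPath);
```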
Example 3
Finally, here is a trick for passing data from Java to script code using UnitySendMessage.
using UnityEngine;
public class NewBehaviourScript : MonoBehaviour {
void Start () {
AndroidJNIHelper.debug = true;
using (AndroidJavaClass jc = new AndroidJavaClass("com.unity3d.player.UnityPlayer")) {
jc.CallStatic("UnitySendMessage", "Main Camera", "JavaMessage", "whoowhoo");
}
}
void JavaMessage(string message) {
Debug.Log("message from java: " + message);
}
}
The Java class com.unity3d.player.UnityPlayer now has a static method UnitySendMessage, equivalent to the iOS UnitySendMessage on the native side. It can be used in Java to pass data to script code.
Here though, we call it directly from script code, which essentially relays the message on the Java side. This then calls back to the native/Unity code to deliver the message to the object named "Main Camera". This object has a script attached which contains a method called "JavaMessage".
Best practice when using Java plugins with Unity
As this section is mainly aimed at people who don't have comprehensive JNI, Java and Android experience, we assume that the AndroidJavaObject/AndroidJavaClass approach has been used for interacting with Java code from Unity.
The first thing to note is that any operation you perform on an AndroidJavaObject or AndroidJavaClass is computationally expensive (as is the raw JNI approach). It is highly advisable to keep the number of transitions between managed and native/Java code to a minimum, for the sake of performance and also code clarity.
You could have a Java method to do all the actual work and then use AndroidJavaObject / AndroidJavaClass to communicate with that method and get the result. However, it is worth bearing in mind that the JNI helper classes try to cache as much data as possible to improve performance.
//The first time you call a Java function like
AndroidJavaObject jo = new AndroidJavaObject("java.lang.String", "some_string"); // somewhat expensive
int hash = jo.Call<int>("hashCode"); // first time - expensive
int hash = jo.Call<int>("hashCode"); // second time - not as expensive as we already know the java method and can call it directly
The Mono garbage collector should release all created instances of AndroidJavaObject and AndroidJavaClass after use, but it is advisable to keep them in a using(){} statement to ensure they are deleted as soon as possible. Without this, you cannot be sure when they will be destroyed. If you set AndroidJNIHelper.debug to true, you will see a record of the garbage collector's activity in the debug output.
//Getting the system language with the safe approach
void Start () {
using (AndroidJavaClass cls = new AndroidJavaClass("java.util.Locale")) {
using(AndroidJavaObject locale = cls.CallStatic<AndroidJavaObject>("getDefault")) {
Debug.Log("current lang = " + locale.Call<string>("getDisplayLanguage"));
}
}
}
You can also call the .Dispose() method directly to ensure there are no Java objects lingering. The actual C# object might live a bit longer, but will be garbage collected by Mono eventually.
Extending the UnityPlayerActivity Java Code
With Unity Android it is possible to extend the standard UnityPlayerActivity class (the primary Java class for the Unity Player on Android, similar to AppController.mm on Unity iOS).
An application can override any and all of the basic interaction between Android OS and Unity Android. You can enable this by creating a new Activity which derives from UnityPlayerActivity (UnityPlayerActivity.java can be found at on Mac and usually at on Windows).
To do this, first locate the shipped with Unity Android. It is found in the installation folder (usually (on Windows) or (on Mac)) in a sub-folder called . Then add to the classpath used to compile the new Activity. The resulting .class file(s) should be compressed into a .jar file and placed in the folder. Since the manifest dictates which activity to launch it is also necessary to create a new AndroidManifest.xml. The AndroidManifest.xml file should also be placed in the folder.
The new activity could look like the following example, OverrideExample.java:
package com.company.product;
import com.unity3d.player.UnityPlayerActivity;
import android.os.Bundle;
import android.util.Log;
public class OverrideExample extends UnityPlayerActivity {
protected void onCreate(Bundle savedInstanceState) {
// call UnityPlayerActivity.onCreate()
super.onCreate(savedInstanceState);
// print debug message to logcat
Log.d("OverrideActivity", "onCreate called!");
}
public void onBackPressed()
{
// instead of calling UnityPlayerActivity.onBackPressed() we just ignore the back button event
// super.onBackPressed();
}
}
And this is what the corresponding AndroidManifest.xml would look like:
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.company.product">
<application android:icon="@drawable/app_icon" android:label="@string/app_name">
<activity android:name=".OverrideExample"
android:label="@string/app_name"
android:configChanges="fontScale|keyboard|keyboardHidden|locale|mnc|mcc|navigation|orientation|screenLayout|screenSize|smallestScreenSize|uiMode|touchscreen">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
UnityPlayerNativeActivity
It is also possible to create your own subclass of UnityPlayerNativeActivity. This will have much the same effect as subclassing UnityPlayerActivity but with improved input latency. Be aware, though, that NativeActivity was introduced in Gingerbread and does not work with older devices. Since touch/motion events are processed in native code, Java views would normally not see those events. There is, however, a forwarding mechanism in Unity which allows events to be propagated to the DalvikVM. To access this mechanism, you need to modify the manifest file as follows:-
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.company.product">
<application android:icon="@drawable/app_icon" android:label="@string/app_name">
<activity android:name=".OverrideExampleNative"
android:label="@string/app_name"
android:configChanges="fontScale|keyboard|keyboardHidden|locale|mnc|mcc|navigation|orientation|screenLayout|screenSize|smallestScreenSize|uiMode|touchscreen">
<meta-data android:name="android.app.lib_name" android:value="unity" />
<meta-data android:name="unityplayer.ForwardNativeEventsToDalvik" android:value="true" />
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Note the ".OverrideExampleNative" attribute in the activity element and the two additional meta-data elements. The first meta-data is an instruction to use the Unity library libunity.so. The second enables events to be passed on to your custom subclass of UnityPlayerNativeActivity.
Examples
Native Plugin Sample
A simple example of the use of a native code plugin can be found here
This sample demonstrates how C code can be invoked from a Unity Android application. The package includes a scene which displays the sum of two values as calculated by the native plugin. Please note that you will need the Android NDK to compile the plugin.
Java Plugin Sample
An example of the use of Java code can be found here
This sample demonstrates how Java code can be used to interact with the Android OS and how C++ creates a bridge between C# and Java. The scene in the package displays a button which when clicked fetches the application cache directory, as defined by the Android OS. Please note that you will need both the JDK and the Android NDK to compile the plugins.
Here is a similar example but based on a prebuilt JNI library to wrap the native code into C#.
Page last updated: 2012-09-25
Android Splash Screen

iOS
Under iOS Basic, a default splash screen will be displayed while your game loads, oriented according to the Default Screen Orientation option in the Player Settings.
Users with an iOS Pro license can use any texture in the project as a splash screen. The size of the texture depends on the target device (320x480 pixels for 1-3rd gen devices, 1024x768 for iPad, 640x960 for 4th gen devices) and supplied textures will be scaled to fit if necessary. You can set the splash screen textures using the iOS Player Settings.

Android
Under Android Basic, a default splash screen will be displayed while your game loads, oriented according to the Default Screen Orientation option in the Player Settings.
Android Pro users can use any texture in the project as a splash screen. You can set the texture from the Splash Image section of the Android Player Settings. You should also select the Splash scaling method from the following options:-
- Center (only scale down) will draw your image at its natural size unless it is too large, in which case it will be scaled down to fit.
- Scale to fit (letter-boxed) will draw your image so that the longer dimension fits the screen size exactly. Empty space around the sides in the shorter dimension will be filled in black.
- Scale to fill (cropped) will scale your image so that the shorter dimension fits the screen size exactly. The image will be cropped in the longer dimension.
nacl-gettingstarted
Native Client (NaCl) is a new technology by Google which allows you to embed native executable code in web pages, enabling the deployment of high-performance web apps without requiring a plugin install. Currently, NaCl is only supported in Google Chrome on Windows, Mac OS X and Linux (with Chrome OS support being worked on), but the technology is open source, so it could be ported to other browser platforms in the future.
Unity 3.5 offers support to run Unity Web Player content (.unity3d files) using NaCl to allow content to be run without requiring a plugin install in Chrome. This is an early release - it should be stable to use, but it does not yet support all features supported in the Unity Web Player, because NaCl is an evolving platform, and does not support everything we can do in a browser plugin.
Building and Testing games on NaCl
Building and testing games on NaCl is very simple. You need to have Google Chrome installed. Simply choose "Web Player" in Build Settings, and tick the "Enable NaCl" checkbox. This will make sure the generated unity3d file can be run on NaCl (by including GLSL ES shaders needed for NaCl, and by disabling dynamic fonts not supported by NaCl), and install the NaCl runtime and a html file to launch the game in NaCl. If you click Build & Run, Unity will install your player as an app in Chrome and launch it automatically.
Shipping Games with NaCl
In its current state, NaCl is not enabled for generic web pages in Chrome by default. While you can embed a NaCl player into any web page, and direct your users to manually enable NaCl in chrome://flags, the only way to currently ship NaCl games and have them work out of the box is to deploy them on the Chrome Web Store (for which NaCl is enabled by default). Note that the Chrome Web Store is fairly unrestrictive, and allows you to host content embedded into your own web site, or to use your own payment processing system if you like. The plan is that this restriction will be lifted when Google has finished a new technology called portable NaCl (PNaCl), which lets you ship executables as LLVM bitcode, thus making NaCl apps independent of any specific CPU architectures. Then NaCl should be enabled for any arbitrary web site.
Notes on Build size
When you make a NaCl build, you will probably notice that the unity_nacl_files_3.x.x folder is very large, over 100 MB. If you are wondering whether all this data needs to be downloaded on each run of NaCl content, the answer is generally "no". There are two ways to serve apps on the Chrome Web Store: as a hosted or a packaged app. If you serve your content as a packaged app, all data will be downloaded on install as a compressed archive, which will then be stored on the user's disk. If you serve your content as a hosted app, data will be downloaded from the web each time. But the NaCl runtime will only download the relevant architecture (i686 or x86_64) from the unity_nacl_files_3.x.x folder, and when the web server is configured correctly, the data will be compressed on transfer, so the actual amount of data to be transferred should be around 10 MB (less when physics stripping is used). The unity_nacl_files_3.x.x folder contains a .htaccess file to set up Apache to compress the data on transfer. If you are using a different web server, you may have to set this up yourself.
Limitations in NaCl
NaCl does not yet support all the features of the regular Unity Web Player. Support for many of these will be coming in future versions of Chrome and Unity. Currently, the following features are unsupported by NaCl:
- Webcam Textures
- Joystick Input
- Caching
- Substances
- Dynamic Fonts
- Networking of any kind other than the WWW class.
- The Profiler does not work, because it requires a network connection to the Editor.
- As with the standard webplayer plugin, native C/C++ plugins are not currently supported by NaCl.
The following features are supported, but have some limitations:
- Depth textures:
- Other graphics features:
- Cursor locking:
- NullReferenceExceptions: you can add softexceptions="1" to the embed parameters (set automatically by Unity when building a development player) to tell Mono to check for null references in software, which results in slower script execution but no crashes.
While Google does not give any system requirements for NaCl other than requiring at least OS X 10.6.7 on the Mac, we've found that it does not work very well on old systems, especially when these systems have old GPUs or graphics drivers, or a small amount of installed main memory. If you need to target old hardware, you may find that the Web Player will give you a better experience.
Fullscreen mode:
Fullscreen mode is supported by setting Screen.fullScreen, but you can only enter fullscreen mode in a frame where the user has released the mouse button. NaCl will not actually change the hardware screen resolution, which is why Screen.resolutions will only ever return the current desktop resolution. However, Chrome supports rendering into smaller back buffers and scaling those up when blitting to the screen. So, requesting resolutions smaller than the desktop resolution is generally supported for fullscreen mode, but will result in GPU-based scaling instead of a change of screen mode.
WWW class:
The WWW class is supported in NaCl, but follows different security policies than the Unity Web Player. While the Unity Web Player uses crossdomain.xml policy files, similar to Flash, Unity NaCl has to follow the cross-origin security model used by NaCl, documented here. Basically, in order to access documents on a domain other than the one hosting the player, you need to configure your web server to send an Access-Control-Allow-Origin response header for the requests, allowing the domain hosting the player.
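As a sketch for Apache (assuming mod_headers is enabled; the domain is a placeholder for wherever your player is actually hosted), the response header can be added in an .htaccess file on the server being accessed:

```
# Allow a player hosted on example.com to fetch documents from this server.
# "http://example.com" is a placeholder; substitute the player's real origin.
Header set Access-Control-Allow-Origin "http://example.com"
```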
Communicating with browser javascript in NaCl
Interacting with the web page using JavaScript is supported, and is very similar to using the Unity Web Player, with one exception: The syntax for sending messages to Unity from html javascript is different, because it has to go through the NaCl module. When you are using the default Unity-generated html, then this code will work:
document.getElementById('UnityEmbed').postMessage("GameObject.Message(parameter)");
Logging
Since NaCl does not allow access to the user file system, it will not write log files. Instead it outputs all logging to stdout. To see the player logs from NaCl:
- Do a Build & Run in the editor once to make sure your game is installed into Chrome as an app.
- On Mac OS X, start Chrome from a Terminal, and start the app by clicking on its icon. You should see the Unity player log output in the terminal.
- On Windows it's the same, but you need to set the NACL_EXE_STDOUT and NACL_EXE_STDERR environment variables, and start Chrome with the --no-sandbox option. See Google's documentation.
flash-gettingstarted
What is Unity Flash?
The Flash build option allows Unity to publish swf (Shockwave Flash) files. These swf files can be played by a Flash plugin installed into your browser. Most computers in the world will either have a Flash Player installed, or can have one installed by visiting the Adobe Flash website. Just like a WebPlayer build creates a file with your 3d assets, audio, physics and scripts, Unity can build a SWF file. All the scripts from your game are automatically converted to ActionScript, which is the scripting language that the Flash Player works with.
Note that the Unity Flash build option exports SWF files for playback in your browser. The SWF is not intended for playback on mobile platforms.
Performance Comparison
We do not currently have direct comparisons of Unity webplayer content vs Flash SWF content. Much of our webplayer code is executed as native code, so for example, PhysX runs as native code. By comparison, when building a SWF file all of the physics runtime code (collision detection, newtonian physics) is converted to ActionScript. Typically you should expect the SWF version to run more slowly than the Unity webplayer version. We are, of course, doing everything we can to optimize for Flash.
Further reading:
- Flash: Setup
- Flash: Building & Running
- Flash: Debugging
- Flash: What is and is not supported
- Flash: Embedding Unity Generated Flash Content in Larger Flash Projects
- Flash: Adobe Premium Features License
- Example: Supplying Data from Flash to Unity
- Example: Calling ActionScript Functions from Unity
- Example: Browser JavaScript Communication
- Example: Accessing the Stage
Other Examples:
- Forums post - Loading Textures from Web (in AS3)
Useful Resources:
- Scripting Reference: ActionScript
- Flash Development section on the Unity forums
- Flash questions on Unity Answers
flash-setup
Installing Unity for Flash
To view the SWF files that Unity creates, your web browser will need Adobe Flash Player 11.2 or newer, which you can obtain from http://get.adobe.com/flashplayer/. If you have Flash Player already installed, please visit http://kb2.adobe.com/cps/155/tn_15507.html to check that you have at least version 11.2. Adobe Flash Player 11 introduced the Stage 3D Accelerated Graphics Rendering feature that Unity requires for 3d rendering.
For system requirements see http://www.adobe.com/products/flashplayer/tech-specs.html
Flash Player Switcher
This will allow you to switch between debug (slow) and regular (fast) versions of the Flash Player. Ensure you have Adobe AIR installed, or download it from http://get.adobe.com/air/. The Flash Player Switcher can be obtained from: https://github.com/jvanoostveen/Flash-Player-Switcher/downloads (select FlashPlayerSwitcher.air). Note: it currently supports only Mac OS X.
Other Adobe Tools/Platforms
No other Adobe tools or platforms are required to develop with Unity and create SWF files. To embed the SWF that Unity builds into your own Flash Application you will need one of Adobe FlashBuilder/PowerFlasher FDT/FlashDeveloper/etc and be an experienced Flash developer. You will need to know:
- Your embedding application needs to be set to -swf-version=15 / fp11.2
- The wmode parameter of your Flash embed needs to be set to direct
flash-building
The following is a step-by-step guide to build and run a new project exported to Flash.
- Create your Unity content.
- Choose File->Build Settings to bring up the Build Settings dialog and add your scene(s).
- Change the Platform to Flash Player
- Target Player can be left as the default. This option enables you to change the target Flash Player based on the features you require (see http://www.adobe.com/support/documentation/en/flashplayer/releasenotes.html for details).
- Tick Development Build. (This causes Unity to not compress the final SWF file. Not compressing will make the build faster, and also, the SWF file will not have to be decompressed before being run in the Flash Player. Note that an empty scene built using the Development Build option will be around 16M in size, compared to around 2M compressed.)
- Press the Build button.
Unity will build a SWF file at the location you choose. Additionally it will create the following files:
- an html file - Use this to view your Flash-built content.
- a swfobject.js file - Handles checking for the Flash Player and browser integration.
- an embeddingapi.swc file.
To view your Flash-built content open the html file. Do not open the SWF file directly.
Build-and-run will create the same files, launch your default browser and load the generated html file.
The embeddingapi.swc file created in the build allows you to load the SWF in your own Flash project. Embedding the Unity content in a standard Flash project also allows you to build your GUI in Flash. This type of Flash integration will, of course, not work on any of the other build targets.
As with the other build targets, there are Player settings that you can specify. Most of the Flash settings are shared with other platforms. Note that the resolution for the content is taken from the Standalone player settings.
A Flash API is also provided that gives you texture handles; in combination with the SWC embedding, this gives you the means to use webcam, video and vector graphics from Flash as textures.
The Build Process
The Unity Flash Publisher attempts to convert scripts from C#/UnityScript into ActionScript. In this process, there can be two kinds of conversion errors:
- errors during conversion of unity code to ActionScript
- errors while compiling the converted code.
Errors during conversion will point to the original files and will have the familiar UnityScript error messages with file names and line numbers.
Errors during the compilation of the converted ActionScript will take you to the message in the generated ActionScript code (with filenames ending with .as).
Debugging Converted ActionScript Code
During a build, the converted ActionScript (.as) files are stored within your project folder in:
- /Temp/StagingArea/Data/ConvertedDotNetCode/
If you encounter errors with your SWF (at runtime or during a build), it can be useful to look at this converted code.
ActionScript errors at compilation time may not be easy to understand. Just remember that the ActionScript is generated from your game script code, so any changes you need to make will be in your original code and not in the converted ActionScript files.
Building for a specific Flash Player version
The dropdown box in the build settings window will enable you to choose which Flash Player version you wish to target. This will always default to the lowest supported Flash Player version (currently 11.2) upon creating/reopening your Unity project.
If you wish to build for a specific Flash Player version you can do so by creating an editor script to perform the build for you. In order to do this, you can specify a FlashBuildSubtarget in your EditorUserBuildSettings when building to Flash from an editor script. For example:
EditorUserBuildSettings.flashBuildSubtarget = FlashBuildSubtarget.Flash11dot2;
BuildPipeline.BuildPlayer(..., ..., BuildTarget.FlashPlayer, BuildOptions.Development);
Example Build Errors and Warnings
Below are some common errors/warnings you may encounter when using the Flash export. We also have sections on the Forums and Answers dedicated to Flash export which may be of help if your error is not listed below.
Unable to find Java
Error building Player: Exception: Compiling SWF Failed: Unable to launch Java - is the Java Runtime Environment (JRE) installed?
If you encounter the above error at build time, please install the 32-bit JRE and try again.
'TerrainCollider' is not supported
'TerrainCollider' is not supported when building for FlashPlayer. 'TerrainData' is not supported when building for FlashPlayer. Asset: 'Assets/New Terrain.asset'
The terrain feature is not supported when building for the FlashPlayer target. All unsupported features will generate a similar warning. Note that the build will continue; however, the unsupported feature will be missing from the final SWF.
Unboxing
Error: Call to a possibly undefined method RuntimeServices_UnboxSingle_Object through a reference with static type Class.
This is likely because the conversion between types that is defined on the UnityScript side is not defined for our Flash Publisher. Any time you see an error that refers to Unbox it means a type conversion is required but cannot be found. In order to resolve these issues:
- Do not forget to use #pragma strict, and take care of all "implicit downcast" warning messages.
- The rule of thumb is to avoid runtime casts from Object to primitive types (int, float, etc.). Also prefer containers with explicit types to generic ones, for example:
- System.Collections.Generic.List.<float> instead of Array
- Dictionary<string, float> instead of Hashtable
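To illustrate the advice above, here is a small C# sketch. The class and variable names are our own, and the snippet is plain .NET rather than code taken from the Flash publisher:

```csharp
using System.Collections;
using System.Collections.Generic;

public class TypedContainerExample
{
    public static float SumSpeeds()
    {
        // Preferred: an explicitly typed container. Retrieval needs no
        // Object-to-float conversion, so no unboxing helper is required.
        var speeds = new List<float> { 1.0f, 2.5f };
        float total = 0.0f;
        foreach (float s in speeds)
            total += s;

        // Avoid: an untyped container. The cast from object back to float
        // is exactly the kind of unboxing the Flash publisher can fail on.
        var table = new Hashtable();
        table["speed"] = 1.0f;
        float cast = (float)table["speed"]; // fine in .NET, risky on Flash export
        return total + cast;
    }
}
```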
UnauthorizedAccessException
Error building Player: UnauthorizedAccessException: Access to the path "Temp/StagingArea/Data/ConvertedDotNetCode/global" is denied.
If Unity-generated ActionScript files are open in a text editor, Unity may refuse to build issuing this error. To fix this, please close the ActionScript files and allow Unity to overwrite them.
Page last updated: 2012-11-06
flash-debugging
Where can I find my Flash Player log file?
Make sure you've done all of the following:
1) Install "content debugger" version of the Adobe Flash Player plugin from: http://www.adobe.com/support/flashplayer/downloads.html
2) Go to http://flashplayerversion.com/, and make sure that it says 'Debugger: Yes'
3) Be careful using Chrome as it ships with its own Flash Player. If you wish to use Chrome with the debug Flash Player, you can do so by following these instructions: http://helpx.adobe.com/flash-player/kb/flash-player-google-chrome.html
4) Create a file called mm.cfg which will instruct the Flash Player to create a logfile. The mm.cfg file needs to be placed here:
| Macintosh OS X | /Library/Application Support/Macromedia/mm.cfg |
| XP | C:\Documents and Settings\username\mm.cfg |
| Windows Vista/Win7 | C:\Users\username\mm.cfg |
| Linux | /home/username/mm.cfg |
Write this text in the mm.cfg file:
ErrorReportingEnable=1
TraceOutputFileEnable=1
5) Find and open your flashlog.txt here:
| Macintosh OS X | /Users/username/Library/Preferences/Macromedia/Flash Player/Logs/ |
| XP | C:\Documents and Settings\username\Application Data\Macromedia\Flash Player\Logs |
| Windows Vista/Win7 | C:\Users\username\AppData\Roaming\Macromedia\Flash Player\Logs |
| Linux | /home/username/.macromedia/Flash_Player/Logs/ |
Note that whilst your content is running this flashlog.txt will constantly be updated as new debug messages are generated by your script code. You may need to reload the file or use an editor that can reload as the file grows in size.
More details about enabling debug logs when using SWFs is available at: http://livedocs.adobe.com/flex/3/html/help.html?content=logging_04.html.
Page last updated: 2012-11-06
flash-whatssupported
Supported
- Flash Player 11.2, 11.3 and 11.4
- Full ActionScript API Access
- Lightmapping
- Occlusion culling
- Editor Scripting (JavaScript / C# / Boo). Note: for JavaScript, use #pragma strict.
- Custom shaders
- Animation / skinning
- Basic types like int, string, List
- Basic audio features, such as AudioSource / AudioListener
- Physics
- Navigation Meshes
- Substance Textures, however the textures are baked at build time so cannot be dynamically changed at runtime
- PlayerPrefs - On Flash PlayerPrefs are stored per SWF per machine
- UnityGUI classes that do not require text input
- Particle System (Shuriken) works and is script accessible
- Asset bundles - These are supported but caching of bundles (i.e. use of LoadFromCacheOrDownload) is not currently supported
- WWW and WWWForm
- Mecanim
Limited support
- Realtime shadows work, but are affected by bugs in image effects
- Untyped variables in JavaScript and implicit type conversions
- Unity GUI / Immediate mode GUI
- Any .NET-specific features. Do not use exotic class libraries (reflection, LINQ, etc.).
- GUIText will have a dramatic impact on performance
Not Currently Supported
- Image Effects
- Unity profiler
- UnityGUI classes that require text input
- RakNet networking (if you need networking, you can write it directly in ActionScript 3 using the Flash API)
- Cloth
- VertexLit shaders currently do not support Spot Lights (they are treated just like point lights).
- Advanced audio features, such as audio effects
- Terrain
- Texture mipMapBias
- Non-triangle MeshTopology and wireframe rendering
- AsyncOperation
Won't be supported
- Sockets - It is possible to use ActionScript sockets by implementing them in AS3.
- Deferred rendering
Texture Support
We support JPEG textures, as well as RGBA / Truecolor. Textures which are JPEG-XR compressed are not readable and thus not supported.
The compression ratio can be specified in the texture import under 'Override for FlashPlayer' setting. Compressed textures get converted to jpeg with the chosen compression ratio. The compression ratio is worth experimenting with since it can considerably reduce the size of the final SWF.

Texture quality ranges from 0 to 100, with 100 indicating no compression, and 0 the highest amount of compression possible.
The maximum supported texture resolution is 2048x2048.
Unavailable APIs
- UnityEngine.AccelerationEvent
- UnityEngine.Achievement
- UnityEngine.AchievementDescription
- UnityEngine.GameCenter
- UnityEngine.GcLeaderboard
- UnityEngine.IDList
- UnityEngine.ISocial
- UnityEngine.Leaderboard
- UnityEngine.LocalServices
- UnityEngine.RectOffset
- UnityEngine.Score
- UnityEngine.Security
- UnityEngine.Serialization.ListSerializationSurrogate
- UnityEngine.Serialization.UnitySurrogateSelector
- UnityEngine.Social
- UnityEngine.StackTraceUtility
- UnityEngine.TextEditor
- UnityEngine.Types
- UnityEngine.UnityException
- UnityEngine.UnityLogWriter
- UnityEngine.UserProfile
flash-embeddingapi
embeddingapi.swc
If you want to embed your Unity generated Flash content within a larger Flash project, you can do so using the embeddingapi.swc. This SWC provides functionality to load and communicate with Unity published Flash content. In the embeddingapi.swc file, you will find two classes and two interfaces. Each of these, and their available functions, are described below.
When your Unity Flash project is built, a copy of the embeddingapi.swc file will be placed in the same location as your built SWF. You can then use this in your Flash projects as per other SWCs. For more details on what SWCs are and how to use them, see Adobe's documentation.
Stage3D Restrictions
When embedding your Unity Flash content within another Flash project, it is useful to understand the Flash display model. All Stage3D content is displayed behind the Flash Stage. This means that any Flash display list content added to the Stage will always render in front of your 3D content. For more information on this, please refer to Adobe's "How Stage3D Works" page.
IUnityContent
IUnityContent is implemented by Unity built Flash content. This interface is how you communicate with or modify the Unity content.
Methods:
| getTextureFromNativeId(id : int) : TextureBase; | Enables retrieving of textures. A full example project using this can be found on the forums. |
| sendMessage(objectPath : String, methodName : String, value : Object = null) : Boolean; | The sendMessage function can be used to call a method on an object in the Unity content. |
| setContentHost(contentHost : IUnityContentHost) : void; | Sets the host (which must implement IUnityContentHost) for the Unity content. The host can then listen for when the Unity content has loaded/started. |
| setSize(width : int, height : int) : void; | Modifies the size of the Unity content |
| setPosition(x:int = 0, y:int = 0):void; | Enables you to reposition the Unity content within the content host. |
| startFrameLoop() : void; | Starts the Unity content. |
| stopFrameLoop() : void; | Stops the Unity content. |
| forceUnload():void; | Unloads the Unity flash content. |
IUnityContentHost
This must be implemented by whichever class will host the Unity content.
Methods:
| unityInitComplete() : void; | Called when the Unity engine is done initializing and the first level is loaded. |
| unityInitStart() : void; | Called when the content is loaded and the initialization of the Unity engine is started. |
UnityContentLoader
The UnityContentLoader class can be used to load Unity published Flash content and extends the AS3 Loader class. As with standard AS3 Loader instances, you can add event listeners to its contentLoaderInfo in order to know the progress of the load and when it is complete.
Constructor:
UnityContentLoader(contentURL : String, contentHost : IUnityContentHost = null, params : UnityLoaderParams = null, autoLoad : Boolean = true) : void;
Creates a UnityContentLoader instance which you can attach event listeners to and use to load the Unity content.
- contentURL: The URL of the Unity published SWF to load.
- contentHost: The host for the content. This should be your own ActionScript class that implements IUnityContentHost.
- params: Supply a UnityLoaderParams instance if you wish to override the default load details.
- autoLoad: If set to true, the load will begin as soon as the UnityContentLoader has been created (rather than needing to call loadUnity() separately). If you wish to track progress of the load using events, this should be set to false. You can then call loadUnity() manually once the relevant event listeners have been added.
Accessible Properties:
| unityContent : IUnityContent; | Once the content has finished loading, you can access the Unity content to perform functionality such as sendMessage(). |
Methods:
| loadUnity() : void; | Instructs the UnityContentLoader to load the Unity content from the URL supplied in the constructor. |
| forceUnload() : void; | Unloads the Unity content from the host. |
| unload() : void; | Overrides the default unload() method of the AS3 Loader class and calls forceUnload(). |
| unloadAndStop(gc:Boolean = true):void | Unloads the Unity content, then calls the default Loader implementation of unloadAndStop(gc). |
UnityLoaderParams
Constructor:
Parameters can be supplied to the UnityContentLoader when created to provide additional loader configuration.
function UnityLoaderParams(scaleToStage : Boolean = false, width : int = 640, height : int = 480, usePreloader : Boolean = false, autoInit : Boolean = true, catchGlobalErrors : Boolean = true) : void;
- scaleToStage: Whether the Unity content remains at a fixed size or whether it scales as the parent Flash window resizes.
- width: The width of the Unity content.
- height: The height of the Unity content.
- usePreloader: Whether or not to show the Unity preloader.
- autoInit: This is not currently used.
- catchGlobalErrors: Whether to catch errors and display them in a red box in the top left corner of the swf.
Example
The following example shows how to load Unity published Flash content into a host SWF. It shows how to supply custom UnityLoaderParams and track progress of the file load. Once the Unity content has been added to the host, a function in the Unity content is called using the sendMessage function.
public class MyLoader extends Sprite implements IUnityContentHost
{
private var unityContentLoader:UnityContentLoader;
public function MyLoader()
{
var params:UnityLoaderParams = new UnityLoaderParams(false,720,400,false);
unityContentLoader = new UnityContentLoader("UnityContent.swf", this, params, false);
unityContentLoader.contentLoaderInfo.addEventListener(ProgressEvent.PROGRESS, onUnityContentLoaderProgress);
unityContentLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, onUnityContentLoaderComplete);
unityContentLoader.loadUnity();
}
private function onUnityContentLoaderProgress(event:ProgressEvent):void
{
//Respond to load progress
}
private function onUnityContentLoaderComplete(event:Event):void
{
addChild(unityContentLoader);
unityContentLoader.unityContent.setContentHost(this);
}
//unityInitStart has to be implemented by whatever implements IUnityContentHost.
//This is called when the content is loaded and the initialization of the Unity engine is started.
public function unityInitStart():void
{
//Unity engine started
}
//unityInitComplete has to be implemented by whatever implements IUnityContentHost.
//This is called when the Unity engine is done initializing and the first level is loaded.
public function unityInitComplete():void
{
unityContentLoader.unityContent.sendMessage("Main Camera","SetResponder",{responder:this});
}
...
}
Page last updated: 2012-11-06
flash-adobelicense
What is the license and why is it needed?
When publishing your Unity project to Flash, you will need to acquire a license from Adobe in order for the content to work in the Flash Player. The Adobe documentation of premium features explains why a license is required for Unity built Flash games:
"Premium Features includes the XC APIs (domain memory APIs in combination with Stage3D hardware acceleration APIs), which allows C/C++ developers and other developers using 3rd party tools, including Unity, to target Flash Player for the distribution of their games."
For more information and the latest details on the license, please refer to the Adobe article which explains this in detail.
How do I obtain a license?
To obtain a license, you will need to sign into https://www.adobefpl.com/ using your AdobeId and follow their instructions.
Page last updated: 2012-11-06
flashexamples-supplyingdata
If you wish to supply data from Flash to Unity, it must be one of the supported types. You can also create classes to represent the data (by providing a matching C# or JavaScript implementation).
First, create an AS3 implementation of your object and include the class in your project (in a folder called ActionScript):
public class ExampleObject
{
public var anInt : int;
public var someString : String;
public var aBool : Boolean;
}
Now create a C# or JavaScript object which matches the AS3 implementation.
The NotRenamed attribute used below prevents name mangling of constructors, methods, fields and properties.
The NotConverted attribute instructs the build pipeline not to convert a type or member to the target platform. Normally when you build to Flash, each of your C#/JavaScript scripts are converted to an ActionScript (.as) script. Adding the [NotConverted] attribute overrides this process, allowing you to provide your own version of the .as script, manually. The dummy C#/JavaScript which you provide allows Unity to know the signature of the class (i.e. which functions it should be allowed to call), and your .as script provides the implementations of those functions. Note that the ActionScript version will only be used when you build to Flash. In editor or when built to other platforms, Unity will use your C#/JavaScript version.
C#
[NotConverted]
[NotRenamed]
public class ExampleObject
{
[NotRenamed]
public int anInt;
[NotRenamed]
public string someString;
[NotRenamed]
public bool aBool;
}
JavaScript
@NotConverted
@NotRenamed
class ExampleObject
{
@NotRenamed
public var anInt : int;
@NotRenamed
public var someString : String;
@NotRenamed
public var aBool : boolean;
}
Now you need a way in AS3 to retrieve your object, e.g.:
public static function getExampleObject() : ExampleObject
{
return new ExampleObject();
}
You can then retrieve the object and access its data:
ExampleObject exampleObj = UnityEngine.Flash.ActionScript.Expression<ExampleObject>("MyStaticASClass.getExampleObject()");
Debug.Log(exampleObj.someString);
Page last updated: 2012-10-24
flashexamples-callingflashfunctions
This example shows how you can call different AS3 functions from Unity. You will encounter three scripts:
- An AS3 class (ExampleClass.as) containing different function examples. Any AS3 classes you create must be placed within an "ActionScript" folder in your project.
- A C#/JavaScript class (ExampleClass.cs/js) which mimics the AS3 implementation. You only need one of these.
- An example of how to call the functions from Unity.
When built to Flash, the AS3 implementation of ExampleClass is used. When run in-editor or built to any platform other than Flash the C#/JavaScript implementation will be used.
Creating an ActionScript version of your classes enables you to use native AS3 libraries when building for Flash Player. This is particularly useful when you need to work around a .NET library which isn't yet supported for Flash export.
ExampleClass.as
public class ExampleClass
{
public static function aStaticFunction() : void
{
trace("aStaticFunction - AS3 Implementation");
}
public static function aStaticFunctionWithParams(a : int) : void
{
trace("aStaticFunctionWithParams - AS3 Implementation");
}
public static function aStaticFunctionWithReturnType() : int
{
trace("aStaticFunctionWithReturnType - AS3 Implementation");
return 1;
}
public function aFunction() : void
{
trace("aFunction - AS3 Implementation");
}
}
ExampleClass - C#/JavaScript Implementation
You can create the class to mimic the AS3 implementation in either C# or JavaScript. The implementations are very similar. Both examples are provided below.
C# Implementation (ExampleClass.cs)
using UnityEngine;
[NotRenamed]
[NotConverted]
public class ExampleClass
{
[NotRenamed]
public static void aStaticFunction()
{
Debug.Log("aStaticFunction - C# Implementation");
}
[NotRenamed]
public static void aStaticFunctionWithParams(int a)
{
Debug.Log("aStaticFunctionWithParams - C# Implementation");
}
[NotRenamed]
public static int aStaticFunctionWithReturnType()
{
Debug.Log("aStaticFunctionWithReturnType - C# Implementation");
return 1;
}
[NotRenamed]
public void aFunction()
{
Debug.Log("aFunction - C# Implementation");
}
}
JavaScript Implementation (ExampleClass.js)
@NotConverted
@NotRenamed
class ExampleClass
{
@NotRenamed
static function aStaticFunction()
{
Debug.Log("aStaticFunction - JS Implementation");
}
@NotRenamed
static function aStaticFunctionWithParams(a : int)
{
Debug.Log("aStaticFunctionWithParams - JS Implementation");
}
@NotRenamed
static function aStaticFunctionWithReturnType() : int
{
Debug.Log("aStaticFunctionWithReturnType - JS Implementation");
return 1;
}
@NotRenamed
function aFunction()
{
Debug.Log("aFunction - JS Implementation");
}
}
How to Call the Functions
The below code will call the methods in the ActionScript (.as) implementation when building for Flash. This will allow you to use native AS3 libraries in your flash export projects. When building to a non-Flash platform or running in editor, the C#/JS implementation of the class will be used.
ExampleClass.aStaticFunction();
ExampleClass.aStaticFunctionWithParams(1);
int returnedValue = ExampleClass.aStaticFunctionWithReturnType();
ExampleClass exampleClass = new ExampleClass();
exampleClass.aFunction();
Page last updated: 2012-11-06
flashexamples-browserjavascriptcommunication
This example shows how AS3 code can communicate with JavaScript in the browser. It makes use of the ExternalInterface ActionScript class.
When run, the BrowserCommunicator.TestCommunication() function will register a callback that the browser JavaScript can then call. The ActionScript will then call out to the browser JavaScript, causing an alert popup to be displayed. The exposed ActionScript function will then be invoked by the JavaScript, completing the two-way communication test.
Required JavaScript
The following JavaScript needs to be added to the html page that serves the Unity published SWF. It creates the function which will be called from ActionScript:
<script type="text/javascript">
function calledFromActionScript()
{
alert("ActionScript called Javascript function")
var obj = swfobject.getObjectById("unityPlayer");
if (obj)
{
obj.callFromJavascript();
}
}
</script>
BrowserCommunicator.as (and matching C# class)
package
{
import flash.external.ExternalInterface;
import flash.system.Security;
public class BrowserCommunicator
{
//Exposed so that it can be called from the browser JavaScript.
public static function callFromJavascript() : void
{
trace("Javascript successfully called ActionScript function.");
}
//Sets up an ExternalInterface callback and calls a Javascript function.
public static function TestCommunication() : void
{
if (ExternalInterface.available)
{
try
{
ExternalInterface.addCallback("callFromJavascript", callFromJavascript);
}
catch (error:SecurityError)
{
trace("A SecurityError occurred: " + error.message);
}
catch (error:Error)
{
trace("An Error occurred: " + error.message);
}
ExternalInterface.call('calledFromActionScript');
}
else
{
trace("External interface not available");
}
}
}
}
C# dummy implementation of the class:
[NotConverted]
[NotRenamed]
public class BrowserCommunicator
{
[NotRenamed]
public static void TestCommunication()
{
}
}
How to test
Simply call BrowserCommunicator.TestCommunication() and this will invoke the two-way communication test.
Potential Issues
Security Sandbox Violation
A SecurityError occurred: Error #2060: Security sandbox violation
This happens when your published SWF does not have permission to access your html file. To fix this locally, you can either:
- Add the folder containing the SWF to the Flash Player's trusted locations in the Global Security Settings Panel.
- Host the file on localhost.
For more information on the Flash Security Sandboxes, please refer to the Adobe documentation.
Page last updated: 2012-10-24
flashexamples-accessingthestage
You can access the Flash Stage from your C#/JS scripts in the following way:
ActionScript.Import("com.unity.UnityNative");
ActionScript.Statement("trace(UnityNative.stage);");
As an example, the following C# code will output the flashvars supplied to a SWF:
ActionScript.Import("flash.display.LoaderInfo");
ActionScript.Statement(
"var params:Object = LoaderInfo(UnityNative.stage.loaderInfo).parameters;" +
"var key:String;" +
"for (key in params) {" +
"trace(key + '=' + params[key]);" +
"}"
);
Page last updated: 2012-11-06
FAQ
The following is a list of common tasks in Unity and how to accomplish them.
- Upgrade guide from 3.5 to 4.0
- Unity 3.5 upgrade guide
- Upgrading your Unity Projects from 2.x to 3.x
- Unity 4.0 Activation - Overview
- Game Code Questions
- Graphics Questions
- FBX export guide
- Art Asset Best-Practice Guide
- How do I import objects from my 3D app?
- Workflow Questions
- Mobile Developer Checklist
Upgrade guide from 3.5 to 4.0
GameObject active state
Unity 4.0 changes how the active state of GameObjects is handled. A GameObject's active state is now inherited by its child GameObjects, so any GameObject which is inactive will also cause its children to be inactive. We believe that the new behavior makes more sense than the old one, and should have always been this way. Also, the upcoming new GUI system depends heavily on the new 4.0 behavior, and would not be possible without it. Unfortunately, this may require some work to fix existing projects for the new Unity 4.0 behavior. The change is as follows:
The old behavior:
- Whether a GameObject is active or not was defined by its .active property.
- This could be queried and set by checking the .active property.
- A GameObject's active state had no impact on the active state of child GameObjects. If you wanted to activate or deactivate a GameObject and all of its children, you needed to call GameObject.SetActiveRecursively.
- When using SetActiveRecursively on a GameObject, the previous active state of any child GameObject would be lost. When you deactivated and then activated a GameObject and all its children using SetActiveRecursively, any child which had been inactive before the call would become active, and you had to keep track of the active state of children manually if you wanted to restore it to the way it was.
- Prefabs could not contain any active state, and were always active after prefab instantiation.
The new behavior:
- Whether a GameObject is active or not is defined by its own .activeSelf property and that of all of its parents. The GameObject is active only if its own .activeSelf property and those of all of its parents are true. If any of them are false, the GameObject is inactive.
- This can be queried using the .activeInHierarchy property.
- The .activeSelf state of a GameObject can be changed by calling GameObject.SetActive. When calling SetActive (false) on a previously active GameObject, this will deactivate the GameObject and all its children. When calling SetActive (true) on a previously inactive GameObject, this will activate the GameObject, if all its parents are active. Children will be activated when all their parents are active (i.e., when all their parents have .activeSelf set to true).
- This means that SetActiveRecursively is no longer needed, as active state is inherited from the parents. It also means that, when deactivating and activating part of a hierarchy by calling SetActive, the previous active state of any child GameObject will be preserved.
- Prefabs can contain active state, which is preserved on prefab instantiation.
Example:
You have three GameObjects, A, B and C, so that B and C are children of A.
- Deactivate C by calling C.SetActive(false).
- Now, A.activeInHierarchy == true, B.activeInHierarchy == true and C.activeInHierarchy == false.
- Likewise, A.activeSelf == true, B.activeSelf == true and C.activeSelf == false.
- Now we deactivate the parent A by calling A.SetActive(false).
- Now, A.activeInHierarchy == false, B.activeInHierarchy == false and C.activeInHierarchy == false.
- Likewise, A.activeSelf == false, B.activeSelf == true and C.activeSelf == false.
- Now we activate the parent A again by calling A.SetActive(true).
- Now, we are back to A.activeInHierarchy == true, B.activeInHierarchy == true and C.activeInHierarchy == false.
- Likewise, A.activeSelf == true, B.activeSelf == true and C.activeSelf == false.
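The same walkthrough can be written in script form. This is a sketch which assumes A, B and C are GameObject references arranged as described above:

```csharp
// B and C are children of A.
C.SetActive(false);
// C.activeSelf == false, C.activeInHierarchy == false

A.SetActive(false);            // deactivates the whole hierarchy
// B.activeSelf == true          (B's own state is preserved)
// B.activeInHierarchy == false  (an inactive parent wins)

A.SetActive(true);             // B becomes active again
// C.activeSelf is still false, so C stays inactive:
// C.activeInHierarchy == false
```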
The new active state in the editor
To visualize these changes, in the Unity 4.0 editor any GameObject which is inactive (either because its own .activeSelf property is set to false, or that of one of its parents) will be greyed out in the hierarchy, and have a greyed-out icon in the inspector. The GameObject's own .activeSelf property is reflected by its active checkbox, which can be toggled regardless of parent state (but it will only activate the GameObject if all parents are active).
How this affects existing projects:
- To make you aware of places in your code where this might affect you, the GameObject.active property and the GameObject.SetActiveRecursively() function have been deprecated.
- They are, however, still functional. Reading the value of GameObject.active is equivalent to reading GameObject.activeInHierarchy, and setting GameObject.active is equivalent to calling GameObject.SetActive(). Calling GameObject.SetActiveRecursively() is equivalent to calling GameObject.SetActive() on the GameObject and all of its children.
- Existing scenes from 3.5 are imported by setting the .activeSelf property of every GameObject in the scene to its previous active property.
- As a result, any project imported from previous versions of Unity should still work as expected (with compiler warnings, though), as long as it does not rely on having active children of inactive GameObjects (which is no longer possible in Unity 4.0).
- If your project relies on having active children of inactive GameObjects, you need to change your logic to a model which works in Unity 4.0.
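If you want to silence the deprecation warnings while keeping the old recursive behavior, a minimal sketch is a helper that walks the transform hierarchy (SetActiveRecursive and ActiveUtility are hypothetical names, not part of the Unity API):

```csharp
using UnityEngine;

public static class ActiveUtility
{
    // Mimics the deprecated GameObject.SetActiveRecursively():
    // sets the activeSelf flag on the object and every descendant.
    public static void SetActiveRecursive(GameObject go, bool state)
    {
        go.SetActive(state);
        foreach (Transform child in go.transform)
            SetActiveRecursive(child.gameObject, state);
    }
}
```

In most cases a plain SetActive() on the root is enough in 4.0, since children are implicitly deactivated through activeInHierarchy; the helper is only needed when you really want to flip every child's own activeSelf flag.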
Changes to the asset processing pipeline
During the development of 4.0, our asset import pipeline changed in some significant internal ways in order to improve performance, memory usage, and determinism. For the most part these changes do not have an impact on the user, with one exception: objects in assets are not made persistent until the very end of the import pipeline, and any previously imported version of an asset is completely replaced.
The first part means that during post processing you cannot get correct references to objects in the asset, and the second part means that if you use references to a previously imported version of the asset during post processing to store modifications, those modifications will be lost.
Example of references being lost because they are not persistent yet
Consider this small example:
using UnityEditor;
using UnityEngine;

public class ModelPostprocessor : AssetPostprocessor
{
    public void OnPostprocessModel(GameObject go)
    {
        // CreatePrefab expects a project-relative path ending in ".prefab".
        PrefabUtility.CreatePrefab("Assets/Prefabs/" + go.name + ".prefab", go);
    }
}
In Unity 3.5 this would create a prefab with all the correct references to the meshes and so on, because all the meshes would already have been made persistent. Since this is not the case in Unity 4.0, the same post processor will create a prefab where all the references to the meshes are gone, simply because Unity 4.0 does not yet know how to resolve the references to objects in the original model prefab. To correctly copy a model prefab into a regular prefab, you should use OnPostprocessAllAssets to go through all imported assets, find the model prefab, and create new prefabs as above.
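A hedged sketch of that corrected approach follows (the ".fbx" filter, the "Assets/Prefabs" folder, and the class name are assumptions; adjust them to your project):

```csharp
using UnityEditor;
using UnityEngine;

public class ModelPrefabCopier : AssetPostprocessor
{
    // Runs after the whole import pipeline has finished, when objects are
    // persistent, so references to meshes inside the model survive in the copy.
    static void OnPostprocessAllAssets(string[] importedAssets,
        string[] deletedAssets, string[] movedAssets, string[] movedFromAssetPaths)
    {
        foreach (string path in importedAssets)
        {
            if (!path.EndsWith(".fbx")) continue; // only handle model files

            GameObject model = (GameObject)AssetDatabase.LoadMainAssetAtPath(path);
            if (model != null)
                PrefabUtility.CreatePrefab("Assets/Prefabs/" + model.name + ".prefab", model);
        }
    }
}
```

Note that OnPostprocessAllAssets must be static, unlike the per-asset callbacks such as OnPostprocessModel.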
Example of references to previously imported assets being discarded
The second example is a little more complex, but it is an actual use case we have seen: it worked in 3.5 and broke in 4.0. Here is a simple ScriptableObject with a reference to a mesh.
using UnityEngine;

public class Referencer : ScriptableObject
{
    public Mesh myMesh;
}
We use this ScriptableObject to create an asset that holds a reference to a mesh inside a model. Then, in our post processor, we take that reference and give the mesh a different name; the end result is that after the model has been reimported, the name of the mesh is whatever the post processor determined.
using UnityEditor;
using UnityEngine;

public class Postprocess : AssetPostprocessor
{
    public void OnPostprocessModel(GameObject go)
    {
        // This loads the PREVIOUSLY imported version of the asset - which is
        // exactly what no longer works in Unity 4.0.
        Referencer myRef = (Referencer)AssetDatabase.LoadAssetAtPath("Assets/MyRef.asset", typeof(Referencer));
        myRef.myMesh.name = "AwesomeMesh";
    }
}
This worked fine in Unity 3.5, but in Unity 4.0 the already imported model is completely replaced, so changing the name of the mesh from a previous import has no effect. The solution here is to find the mesh by some other means and change its name. What is most important to note is that in Unity 4.0 you should ONLY modify the given input to the post processor, and not rely on the previously imported version of the same asset.
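One hedged way to "find the mesh by some other means" is to rename it directly on the imported GameObject that is passed into the post processor (the class name is hypothetical; the mesh name is carried over from the example above):

```csharp
using UnityEditor;
using UnityEngine;

public class Postprocess40 : AssetPostprocessor
{
    // In 4.0, modify only the input object of the current import,
    // never objects loaded from a previously imported version of the asset.
    public void OnPostprocessModel(GameObject go)
    {
        foreach (MeshFilter mf in go.GetComponentsInChildren<MeshFilter>())
        {
            if (mf.sharedMesh != null)
                mf.sharedMesh.name = "AwesomeMesh";
        }
    }
}
```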
Page last updated: 2012-10-29
Upgrade guide from 3.4 to 3.5
If you have an FBX file with a root node marked up as a skeleton, it will be imported with an additional root node in 3.5, compared to 3.4

Unity 3.5 does this because, when importing animated characters, the most common setup is to have one root node with all bones below it and a skeleton next to it in the hierarchy. When creating additional animations, it is common to remove the skinned mesh from the FBX file. In that case the new import method ensures that the additional root node always exists, and thus the animations and the skinned mesh actually match.
If the connection between the instance and the FBX file's prefab has been broken in 3.4 the animation will not match in 3.5, and as a result your animation might not play.
In that case it is recommended that you recreate the prefabs or Game Object hierarchies by dragging your FBX file into your scene and recreating it.
Page last updated: 2012-02-04
HowToUpgradeFrom2xTo3x
With Unity's regular point releases, projects are automatically upgraded when first opened in the new editor from a previous minor version of the same major version. New properties are given default values and formats are converted as needed. However, major version changes such as 2.x to 3.x introduce some changes that break backwards compatibility.
This mostly shows up as content created in the previous version playing back slightly differently at runtime in the new engine, but some changes require considerable tweaking before the content plays back as intended. These documents give an overview of the changes from 2.x to 3.x.
Page last updated: 2012-11-09
PhysicsUpgradeDetails
For Unity 3.0, we upgraded the NVIDIA PhysX library from version 2.6 to 2.8.3, which makes many new features available. In general, existing projects should behave much the same as in Unity 2.x, but there may be slight differences in the outcome of the physics simulation, so if your content depends on the exact behavior or on a chain of physics events, you may need to re-tweak your setup to work as expected in Unity 3.x.
If you are using Configurable Joints, the JointDrive.maximumForce property will now also be taken into consideration when JointDrive.mode is JointDriveMode.Position. If you set this value to the default value of zero, the joint will not apply any force. We automatically change all JointDrive properties imported from old versions if JointDrive.mode is JointDriveMode.Position, but you may need to change this manually when you set up joints from code. Also, the default value of JointDrive.maximumForce has been changed to infinity.
Page last updated: 2012-11-09
MonoUpgradeDetails
In Unity 3 we upgraded the mono runtime from 1.2.5 to 2.6 and on top of that, there are some JavaScript and Boo improvements. Aside from all bug fixes and improvements to mono between the two versions, this page lists some of the highlights.
C# Improvements
Basically, these are the differences between C# 3.5 and C# 2.0.
JavaScript Improvements
- Compiler is now 4x faster;
- 'extends' can no longer be used with interfaces; UnityScript now has 'implements' for that purpose (see below);
- Added support for consuming generic types such as generic collections:
var list = new System.Collections.Generic.List.<String>();
list.Add("foo");
- Added support for anonymous functions/closures:
list.Sort(function(x:String, y:String) {
return x.CompareTo(y);
});
- Which include a simplified lambda expression form with type inference for the parameters and return value:
list.Sort(function(x, y) x.CompareTo(y));
- Function types:
function forEach(items, action: function(Object)) {
for (var item in items) action(item);
}
- Type inferred javascript array comprehensions:
function printArray(a: int[]) {
print("[" + String.Join(", ", [i.ToString() for (i in a)]) + "]");
}
var doubles = [i*2 for (i in range(0, 3))];
var odds = [i for (i in range(0, 6)) if (i % 2 != 0)];
printArray(doubles);
printArray(odds);
- Added support for declaring and implementing interfaces:
interface IFoo {
function bar();
}
class Foo implements IFoo {
function bar() {
Console.WriteLine("Foo.bar");
}
}
- All functions are now implicitly virtual; as a result, the 'virtual' keyword has been deprecated, and the 'final' keyword has been introduced to allow non-virtual methods to be defined:
final function foo() {
}
- Value types (structs) can be defined as classes inheriting from System.ValueType:
class Pair extends System.ValueType {
var First: Object;
var Second: Object;
function Pair(fst, snd) {
First = fst;
Second = snd;
}
override function ToString() {
return "Pair(" + First + ", " + Second + ")";
}
}
Boo Improvements
- Boo upgrade to version 0.9.4.
RenderingUpgradeDetails
Unity 3 brings a lot of graphics related changes, and some things might need to be tweaked when you upgrade existing Unity 2.x projects. For changes related to shaders, see Shader Upgrade Guide.
Forward Rendering Path changes
Unity 2.x had one rendering path, which is called Forward in Unity 3. Major changes in it compared to Unity 2.x:
- Most common case (one directional per-pixel light) is drawn in one pass now! (used to be two passes)
- Point & Spot light shadows are not supported. Only one Directional light can cast shadows. Use Deferred Lighting path if you need more shadows.
- Most "Vertex" lights replaced with Spherical Harmonics lighting.
- Forward rendering path is purely shader based now, so it works on OpenGL ES 2.0, Xbox 360, PS3 (i.e. platforms that don't support fixed function rendering).
Shader changes
See Shader Upgrade Guide for more details. Largest change is: if you want to write shaders that interact with lighting, you should use Surface Shaders.
Obscure Graphics Changes That No One Will Probably Notice TM
- Removed Mac Radeon 9200 pixel shader support (!!ATIfs assembly shaders).
- Removed support for per-pixel lighting on pre-ShaderModel 2.0 hardware. As a result, the Diffuse Fast shader is just VertexLit now.
- Removed non-attenuated lights. All point and spot lights are attenuated now.
- Removed script callbacks: OnPreCullObject and the RenderBeforeQueues attribute.
- Removed p-buffer based RenderTextures. RenderTextures on OpenGL require FBO support now.
- Most Pass LightMode tags are gone, replaced with new tags. You should generally be using Surface Shaders for that stuff anyway.
- Texture instanceIDs are not OpenGL texture names anymore. This might affect C++ Plugins that were relying on that; use texture.GetNativeTextureID() instead.
- Renamed shader keywords SHADOWS_NATIVE to SHADOWS_DEPTH and SHADOWS_PCF4 to SHADOWS_SOFT.
- Removed the ambient boost on objects that were affected by more than 8 vertex lights.
- Removed _ObjectSpaceCameraPos and _ObjectSpaceLightPos0 (added _WorldSpaceCameraPos and _WorldSpaceLightPos0).
- The LightmapMode tag in the shader texture property does nothing now.
- Skybox shaders do not write into the depth buffer.
- GrabPass (i.e. the refractive glass shader) now always grabs a texture of the size of the screen.
- #pragma multi_compile_vertex and #pragma multi_compile_fragment are gone.
- Polygon offset in ShaderLab can't reference variables anymore (like Offset [_Var1], [_Var2]).
- Renamed TRANSFER_EYEDEPTH/OUTPUT_EYEDEPTH to UNITY_TRANSFER_DEPTH/UNITY_OUTPUT_DEPTH. They also work on a float2 in Unity 3.
- Removed special shader pass types: R2TPass, OffscreenPass.
- Removed the _Light2World0 and _World2Light0 built-in shader matrices.
- Removed the _SceneAmbient, _MultiModelAmbient, _MultiAmbient, _ModelAmbient, _MultiplyFog, _LightHackedDiffuse0, and _ObjectCenterModelLightColor0 built-in shader vectors.
- Removed the _FirstPass built-in shader float.
- Fog mode in shader files can't come from a variable anymore (like Fog { Mode [_MyFogMode] }). To use global fog mode, write Fog { Mode Global }.
- Removed BlendColor color from ShaderLab.
- Removed support for declaring a texture matrix by value in a shader property.
- Removed support for "static" shader properties.
- Removed support for texture border color (RenderTexture.SetBorderColor).
- Removed ColorMaterial Ambient, Diffuse, Specular support (ColorMaterial AmbientAndDiffuse & Emission are left). Support for the removed ones varied a lot depending on the platform, causing confusion; and they didn't seem to be very useful anyway.
- The built-in _CameraToWorld and _WorldToCamera matrices now do what you'd expect them to do. Previously they only contained the rotation part, and camera-to-world was flipped on the Y axis. Yeah, we don't know how that happened either :)
- Removed Shader.ClearAll(). It had been deprecated since 2007; time to let it go.
- Vertex shaders are compiled to Shader Model 2.0 now (before it was 1.1). If you want to compile to SM1.1, add #pragma target 1.1 in the shader.
SL-V3Conversion
Unity 3 has many new features and changes to its rendering system, and ShaderLab has been updated accordingly. Some advanced shaders that were used in Unity 2.x, especially ones that used per-pixel lighting, will need to be updated for Unity 3. If you have trouble updating them - just ask for our help!
For general graphics related Unity 3 upgrade details, see Rendering Upgrade Details.
When you open your Unity 2.x project in Unity 3.x, it will automatically upgrade your shader files as much as possible. The document below lists all the changes that were made to shaders, and what to do when you need to upgrade shaders manually.
Per-pixel lit shaders
In Unity 2.x, writing shaders that were lit per-pixel was quite complicated. Those shaders would have multiple passes, with LightMode tags on each (usually PixelOrNone, Vertex and Pixel). With addition of Deferred Lighting in Unity 3.0 and changes in old forward rendering, we needed an easier, more robust and future proof way of writing shaders that interact with lighting. All old per-pixel lit shaders need to be rewritten to be Surface Shaders.
Cg shader changes
Built-in "glstate" variable renames
In Unity 2.x, accessing some built-in variables (like model*view*projection matrix) was possible through built-in Cg names like glstate.matrix.mvp. However, that does not work on some platforms, so in Unity 3.0 we renamed those built-in variables. All these replacements will be done automatically when upgrading your project:
- glstate.matrix.mvp to UNITY_MATRIX_MVP
- glstate.matrix.modelview[0] to UNITY_MATRIX_MV
- glstate.matrix.projection to UNITY_MATRIX_P
- glstate.matrix.transpose.modelview[0] to UNITY_MATRIX_T_MV
- glstate.matrix.invtrans.modelview[0] to UNITY_MATRIX_IT_MV
- glstate.matrix.texture[0] to UNITY_MATRIX_TEXTURE0
- glstate.matrix.texture[1] to UNITY_MATRIX_TEXTURE1
- glstate.matrix.texture[2] to UNITY_MATRIX_TEXTURE2
- glstate.matrix.texture[3] to UNITY_MATRIX_TEXTURE3
- glstate.lightmodel.ambient to UNITY_LIGHTMODEL_AMBIENT
- glstate.matrix.texture to UNITY_MATRIX_TEXTURE
Semantics changes
Additionally, it is recommended to use SV_POSITION (instead of POSITION) semantic for position in vertex-to-fragment structures.
More strict error checking
Depending on platform, shaders might be compiled using a different compiler than Cg (e.g. HLSL on Windows) that has more strict error checking. Most common cases are:
- All vertex/fragment shader inputs and outputs need to have "semantics" assigned to them. Unity 2.x allowed semantics to be omitted (in which case some TEXCOORD would be used); in Unity 3.0 a semantic is required.
- All shader output variables need to be written into. For example, if you have a float4 color : COLOR as your vertex shader output, you can't just write into rgb and leave alpha uninitialized.
Other Changes
RECT textures are gone
In Unity 2.x, RenderTextures could be not power of two in size, so called "RECT" textures. They were designated by "RECT" texture type in shader properties and used as samplerRECT, texRECT and so on in Cg shaders. Texture coordinates for RECT textures were a special case in OpenGL: they were in pixels. In all other platforms, texture coordinates were just like for any other texture: they went from 0.0 to 1.0 over the texture.
In Unity 3.0 we have decided to remove this OpenGL special case, and treat non power of two RenderTextures the same everywhere. It is recommended to replace samplerRECT, texRECT and similar uses with regular sampler2D and tex2D. Also, if you were doing any special pixel addressing for the OpenGL case, you need to remove that from your shader, i.e. just keep the non-OpenGL part (look for SHADER_API_D3D9 or SHADER_API_OPENGL macros in your shaders).
Unity 4.x Activation - Overview
What is the new Activation system?
With our new Licensing System, we allow you, the user, to manage your Unity license independently. Contacting the Support Team when you need to switch machine is a thing of the past! The system allows instant, automated migration of your machine, with a single click. Please read our 'Managing your Unity 4.0 License' link for more information.
http://docs.unity3d.com/Documentation/Manual/ManagingyourUnity4xLicense.html
If you're looking for step-by-step guides to Activation of Unity, please see the child pages.
FAQ
How many machines can I install my copy of Unity on?
Every paid commercial Unity license allows a *single* person to use Unity on *two* machines that they have exclusive use of. Be it a Mac and a PC or your Home and Work machines. Educational licenses sold via Unity or any one of our resellers are only good for a single activation. The same goes for Trial licenses, unless otherwise stated.
The free version of Unity may not be licensed by a commercial entity with annual gross revenues (based on fiscal year) in excess of US$100,000, or by an educational, non-profit or government entity with an annual budget of over US$100,000.
If you are a Legal Entity, you may not combine files developed with the free version of Unity with any files developed by you (or by any third party) through the use of Unity Pro. Please see our EULA http://unity3d.com/company/legal/eula for further information regarding license usage.
I need to use my license on another machine, but I get that message that my license has been 'Activated too many times'. What should I do?
You'll need to 'Return' your license. This enables you to return the license from the machine you no longer require it on, which in turn enables you to reactivate it on a new machine. Please refer to the 'Managing your Unity 4.0 License' link at the top of the page for more information.
My account credentials aren't recognised when logging in during the Activation process?
Please ensure that your details are being entered correctly. Passwords ARE case sensitive, so ensure you're typing them exactly as you registered. You can reset your password using the link below:
https://accounts.unity3d.com/password/new
If you're still having issues logging in, please contact 'support@unity3d.com'
Can I use Unity 4.x with my 3.x Serial number?
No, you can't. In order to use Unity 4.x, you'll need to upgrade to a 4.x license. You can do this online, via our Web Store. https://store.unity3d.com/shop/
I'm planning on replacing an item of hardware and/or my OS. What should I do?
As with changing machine, you'll need to 'Return' your license before making any hardware or OS changes to your machine. If you fail to return the license, our server will see a request from another machine and inform you that you've reached your activation limit for the license. Please refer to the 'Managing your Unity 4.0 License' link at the top of the page for more information regarding the return of a license.
My machine died without me being able to 'Return' my license, what now?
Please email 'support@unity3d.com' explaining your situation, including the details below.
- The Serial number you were using on the machine.
- The (local network) name of the machine that died.
The Support Team will then be able to 'Return' your license manually.
I have two licenses, each with an add-on I require, how do I activate them in unison on my machine?
You can't, unfortunately! A single license may only be used on one machine at any one time.
Where is my Unity 4.x license file stored?
- /Library/Application Support/Unity/Unity_v4.x.ulf (OS X)
- C:\ProgramData\Unity (Windows)
For any further assistance, please contact support@unity3d.com.
Managing your Unity 4.x License
With Unity 4.0 you are now able to manage your license independently (no more contacting Support for migration to your shiny new machine). Below is a guide to how this new system works and performs.
You will notice a new option under the 'Unity' drop-down on your toolbar that reads 'Manage License'. This is the unified place within the Editor for all your licensing needs.

Once you have clicked on the 'Manage License' option you will be faced with the 'License Management' window. You then have four options (see image), explained below:

'Check for updates' cross-references the server, querying your Serial number for any changes that may have been made since you last activated. This is handy for updating your license to include new add-ons once purchased and added to your existing license via the Unity Store.
'Activate a new license' does what it says on the tin. This enables you to activate a new Serial number on the machine you're using.
The 'Return license' feature enables you to return the license on the machine in question, in return for a new activation that can be used on another machine. Once clicked the Editor will close and you will be able to activate your Serial number elsewhere. For more information on how many machines a single license enables use on, please see our EULA: http://unity3d.com/company/legal/eula.
'Manual activation' enables you to activate your copy of Unity offline. This is covered in more depth here: http://docs.unity3d.com/Documentation/Manual/ManualActivationGuide.html.
For any further assistance, please contact support@unity3d.com.
Online Activation Guide
Online activation is the easiest and fastest way to get up and running with Unity. Below is a step-by-step guide on how to activate Unity online.
1. Download and install the Unity Editor. The latest version of Unity can be found at http://unity3d.com/unity/download/
2. Fire up the Editor from your Applications folder on OS X or the shortcut in the Start Menu on Windows.
3. You will be faced with a window titled 'Choose a version of Unity'. Select the version of Unity you wish to activate by checking the tick box of the appropriate option, and click 'OK' to proceed.

4. Next, you will encounter the 'Unity Account' window. Here you will need to enter your Unity Developer Network account credentials. (If you don't have an existing account or have forgotten your password, simply click the respective 'Create account' or 'Forgot your password?' links and follow the onscreen prompts to create or retrieve your account.) Once your credentials are entered, you can proceed by clicking 'OK'.

5. 'Thank you for your time.' You will now be able to proceed to the Unity Editor by clicking the 'Start using Unity' button.

6. You're all done!
For any further assistance, please contact support@unity3d.com.
Manual Activation Guide
With our new Licensing System, the Editor will automatically fall back to manual activation if Online Activation fails, or if you don't have an internet connection. Please see the steps below for an outline of how to manually activate Unity 4.0.
1. As above, Unity will fall back to Manual Activation, should the Online Activation fail. However, you can manually prompt Unity to start the Manual Activation procedure by navigating to 'Unity>Manage License' within the Editor.

2. In the 'License Management' window, hit the 'Manual activation' button.

3. You should now be faced with a dialog displaying three buttons:

4. You will need to generate a license file; in order to do this, click the Save License button. Once clicked you will be faced with the window 'Save license information for offline activation'. Here you can select a directory on your machine to save the file.

5. Once saved, you will receive a message stating that 'License file saved successfully'. Click 'Ok' to proceed.

6. Now, you'll need to minimise the Editor and navigate over to https://license.unity3d.com/manual within your browser (if on a machine without an internet connection, you'll need to copy the file to a machine that does have one and proceed there).
7. You now need to navigate to the file you generated in Step 4 and upload it in the appropriate field. When your file has been selected, click 'OK' to proceed.
8. Nearly done! You should have received a file in return, as with Step 4, save this to your machine in a directory of your choice.
9. Moving back into Unity, you can now select the 'Load License' button. Again, this will open up your directories within your hard drive. Now, select the file that you just saved via the Web form and click 'OK'.
10. Voila, you've just completed the Manual Activation process.
For any further assistance, please contact support@unity3d.com.
Game Code How-to
Page last updated: 2012-11-13
HOWTO-First Person Walkthrough
Here is how you can make a simple first person walkthrough of your own level.
- Import your level. See here on how to import geometry from your art package into Unity.
- Select the imported model file and enable Generate Colliders in the Import Settings in the Inspector.
- Locate the First Person Controller prefab in the Project View and drag it into the Scene View.
- Make sure that the scale of your level is correct. The First Person Controller is exactly 2 meters high, so if your level doesn't fit the size of the controller, you should adjust the scale of the level within your modeling application. Getting the scale right is critical for physical simulation, and for other reasons documented at the bottom of this page. Using the wrong scale can make objects feel like they are floating or too heavy. If you can't change the scale in your modeling app, you can change the scale in the model file's Import Settings.
- Move the First Person Controller to the start location using the Transform handles. It is critical that the First Person Controller does not intersect any level geometry when the game starts (otherwise it will be stuck!).
- Remove the default camera "Main Camera" in the Hierarchy View. The First Person Controller already has its own camera.
- Hit Play and walk around in your level.
Graphics how-tos
The following is a list of common graphics-related questions in Unity and how to accomplish them.
There is an excellent tutorial for creating textures for color, bump, specular, and reflection mapping here.
- How do I import alpha textures?
- How do I use normal maps?
- How do I use detail textures?
- How do I make a cubemap texture?
- How do I make a skybox?
- How do I make a mesh particle emitter? (legacy particle system)
- How do I make a splash screen?
- How do I make a spot light cookie?
- How do I fix the rotation of an imported model?
- How do I use water?
HOWTO-alphamaps
Unity uses straight alpha blending, hence you need to expand the color layers. The alpha channel in Unity is read from the first alpha channel in the Photoshop file.
Setting Up
Before doing this, install these alpha utility Photoshop actions: AlphaUtility.atn.zip
After installing, your Action Palette will contain a folder called AlphaUtility.

Getting Alpha Right
Let's assume you have your alpha texture on a transparent layer inside Photoshop, something like this:

- Duplicate the layer.
- Select the lowest layer. This will be the source for the dilation of the background.
- Select the appropriate menu item and apply it with the default properties.
- Run the Dilate Many action a couple of times. This will expand the background into a new layer.
- Select all the dilation layers and merge them.
- Create a solid color layer at the bottom of your image stack. This should match the general color of your document (in this case, greenish). Note: without this layer, Unity will use the alpha merged from all layers.
Now we need to copy the transparency into the alpha layer:
- Set the selection to be the contents of your main layer by Command-clicking on it in the Layer Palette.
- Switch to the Channels Palette.
- Create a new channel from the transparency.
Save your PSD file; you are now ready to go.
Note
If you have an image with transparency (after merging the layers), Unity will use the transparency merged from all layers and ignore the alpha mask. The workaround for this is to create the solid color layer from Item 6 above as the source of the alpha.
Page last updated: 2012-11-13
HOWTO-Normalmap
Normal maps are grayscale images that you use as a height map on your objects in order to create the appearance of raised or recessed surfaces. Assuming you have a model that looks like this:

The 3D model

The texture
We want the light parts of the object to appear raised.
- Draw a grayscale height map of your texture in Photoshop. White is high, black is low. Something like this:
- Save the image next to your main texture.
- In Unity, select the image, choose the 24-bit RGB format, and enable Generate Normal Map in the Import Settings in the Inspector.
- In the Material Inspector of your model, select Bumped Diffuse from the shader drop-down.
- Drag your texture from the Project window to the Normalmap texture slot.
Your object now has a normal map applied:

Hints
- To make the bumps more noticeable, either use the Bumpyness slider in the Texture Import Settings or blur the texture in Photoshop. Experiment with both approaches to get a feel for it.
HOWTO-UseDetailTexture
Detail textures are small, fine-grained patterns that are faded in as you get close to a surface - for example wood grain, imperfections in stone, or fine detail in the ground of a terrain. They are explicitly used with the Diffuse Detail shader.
Detail textures must tile in all directions. Color values from 0-127 make the object darker, 128 doesn't change anything, and lighter colors make the object lighter. It is very important that the image is centered around 128 - otherwise the object it is applied to will get lighter or darker as you approach it.
- Draw or find a grayscale image of the detail texture.

The detail texture
The levels
- Save the image next to your main texture.
- In Unity, select the image and, under "Generate Mip Maps", enable "Fades Out" and set the sliders to something similar in the Import Settings in the Inspector.
- The top slider determines how small the texture should be before it fades out, and the bottom slider determines how far away the detail texture fades out completely.

- In the Material Inspector on the right, select Diffuse Detail from the shader drop-down.
- Drag your texture from the Project View to the "Detail" texture slot.
- Set the Tiling values to a high value.
HOWTO-MakeCubemap
Cubemaps are used by the Reflective built-in shaders. To build one, either create six 2D textures and create a new Cubemap asset, or build the Cubemap from a single square texture. See the Cubemap Texture documentation page for more details.
Static and dynamic cubemap reflections can also be rendered from scripts. The code example on the Camera.RenderToCubemap page contains a simple wizard script for rendering cubemaps straight from the editor.
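As a minimal sketch of rendering a cubemap from script (a simplified variant of the wizard on the Camera.RenderToCubemap page; the class name and the temporary camera setup are illustrative assumptions):

```csharp
using UnityEngine;

public class CubemapCapture : MonoBehaviour
{
    public Cubemap cubemap;   // assign a Cubemap asset in the Inspector

    void Start()
    {
        // Render all six faces of the cubemap from this object's position,
        // using a temporary camera that is destroyed afterwards.
        GameObject go = new GameObject("CubemapCamera");
        go.transform.position = transform.position;
        Camera cam = go.AddComponent<Camera>();
        cam.RenderToCubemap(cubemap);
        Destroy(go);
    }
}
```

For static reflections this would typically be run once in the editor rather than at runtime; the wizard on the reference page does exactly that.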
Page last updated: 2012-11-09
HOWTO-UseSkybox
A Skybox is a 6-sided cube that is drawn behind all graphics in the game. Here are the steps to create one:
- Make 6 textures that correspond to each of the 6 sides of the skybox and put them into your project's Assets folder.
- For each texture, you need to change the wrap mode from Repeat to Clamp. If you don't do this, colors on the edges will not match up.
- Create a new Material by choosing the corresponding item from the menu bar.
- Select the shader drop-down in the top of the Inspector and choose the skybox shader.
- Assign the 6 textures to each texture slot in the material. You can do this by dragging each texture from the Project View onto the corresponding slot.
To assign the skybox to the scene you're working on:
- Choose the render settings item from the menu bar.
- Drag the new skybox material to the "Skybox Material" slot in the Inspector.
The Standard Assets package contains several ready-to-use skyboxes - this is the quickest way to get started.
Page last updated: 2012-11-09
HOWTO-MeshParticleEmitter
Mesh Particle Emitters are generally used when you need high control over where to emit particles from.
For example, when you want to create a flaming sword:
- Drag a mesh into the scene.
- Remove the Mesh Renderer by right-clicking on the Mesh Renderer's Inspector title bar and choosing Remove Component.
- Add a Mesh Particle Emitter from the menu.
- Add a Particle Animator from the menu.
- Add a Particle Renderer from the menu.
You should now see particles emitting from the mesh.
Play around with the values in the Mesh Particle Emitter.
Especially, enable Interpolate Triangles in the Mesh Particle Emitter's Inspector and set Min Normal Velocity and Max Normal Velocity to 1.
To customize the look of the particles that are emitted:
- Choose the menu bar item for creating a new material.
- In the Material Inspector, select a particle shader from the shader drop-down.
- Drag & drop a texture from the Project View onto the texture slot in the Material Inspector.
- Drag the material from the Project View onto the particle system in the Scene View.
You should now see textured particles emitting from the mesh.
See also:
Page last updated: 2012-11-13
HOWTO-SplashScreen
Desktop
Here is how to create a splash screen or any other full-screen image in Unity. This method works for multiple resolutions and aspect ratios.
- First you need a big texture. Ideally, the texture size should be a power of two. For example, you might use 1024x512, as this will fit most screens.
- Make a new cube using the menu bar item.
- Scale it to be in 16:9 format by entering 16 and 9 as the first two values of the Scale.
- Drag the texture onto the cube and make the Camera point at it. Position the camera at such a distance that the cube is shown at a 16:9 aspect ratio. Use the Aspect Ratio Selector in the Scene View menu bar to check the end result.

iOS

Android
Page last updated: 2012-11-09
HOWTO-LightCookie
Unity ships with a few Light Cookies in the Standard Assets. When the Standard Assets are imported into your project, they can be found in the corresponding folder. This page shows how you can create your own.
A great way to add a lot of visual detail to your scenes is to use cookies - grayscale textures you use to control the precise look of in-game lighting. This is fantastic for making moving clouds and giving an impression of dense foliage. The Light Component Reference page has more info on all this, but to be able to use a texture for cookies, the following properties need to be set up in your project.
To create a light cookie for a spot light:
- Paint a cookie texture in Photoshop. The image should be grayscale. White pixels mean full lighting intensity, black pixels mean no lighting. The borders of the texture need to be completely black, otherwise the light will appear to leak outside of the spotlight.
- In the Texture Inspector, change the Repeat wrap mode to Clamp.
Select the texture and edit the following Import Settings in the Inspector:
- Enable Border Mipmaps.
- Enable Build Alpha From Grayscale (this way you can make a grayscale cookie and Unity converts it to an alpha map automatically).
- Set the Texture Format to Alpha 8 Bit.

HOWTO-FixZAxisIsUp
Some 3D art packages export their models so that the Z axis faces upward. Most of the standard scripts in Unity assume that the Y axis represents up in your 3D world. It is usually easier to fix the rotation in Unity than to modify the scripts to make things fit.

A model with the Z axis pointing up
If at all possible, it is recommended that you fix the model in your 3D modeling application to have the Y axis face upward before exporting.
If this is not possible, you can fix it in Unity by adding an extra parent transform:
- Create an empty GameObject using the menu.
- Position the new GameObject so that it is at the center of your mesh, or whichever point you want your object to rotate around.
- Drag the mesh onto the empty GameObject.
You have now made your mesh a child of an empty GameObject with the correct orientation. Whenever you write scripts that use the Y axis as up, attach them to the parent empty GameObject.

The model with an additional empty transform
Page last updated: 2012-11-09
HOWTO-Water
How do I use water?
Note: The content on this page applies to the desktop editor mode only.
Unity includes several water prefabs (including the necessary shaders, scripts, and art assets) within the Standard Assets and Pro Standard Assets packages. Unity includes a basic water, while Unity Pro includes water with real-time reflections and refractions; in both cases, these are provided as separate daylight and nighttime water prefabs.

Reflective daylight water (Unity Pro)

Reflective/refractive daylight water (Unity Pro)
Water setup
In most cases you just need to place one of the existing prefabs into your scene (make sure to have the Standard Assets installed):
- Unity has Daylight Simple Water and Nighttime Simple Water.
- Unity Pro has Daylight Water and Nighttime Water (but it needs some assets from the standard water as well). The water mode (Simple, Reflective, Refractive) can be set in the Inspector.
The prefabs use an oval-shaped mesh for the water. If you need to use a different Mesh, the easiest way is to simply change it in the Mesh Filter of the water object:

Creating water from scratch (Advanced)
The simple water in Unity requires attaching a script to a plane-like mesh and using the water shader:
- Have a mesh for the water. This should be a flat mesh, oriented horizontally. UV coordinates are not required. The water GameObject should use the water layer, which you can set in the Inspector.
- Attach the WaterSimple script to the object.
- Use the water shader in the material, or tweak one of the provided water materials.
The reflective/refractive water in Unity Pro requires similar steps to set up from scratch:
- Have a mesh for the water. This should be a flat mesh, oriented horizontally. UV coordinates are not required. The water GameObject should use the water layer, which you can set in the Inspector.
- Attach the Water script to the object.
- Set the water rendering mode in the Inspector: Simple, Reflective, or Refractive.
- Use the water shader in the material, or tweak one of the provided water materials.
Water material properties
The properties below are used by the reflective and refractive water shaders. Most of them are used by the simple water shader as well.
| Wave scale | Scaling of the waves normal map. The smaller the value, the larger the water waves. |
| Reflection/refraction distort | How much the reflection and refraction are distorted by the waves normal map. |
| Refraction color | Additional tint for refraction. |
| Environment reflection/refraction | Render textures for real-time reflection and refraction. |
| Normalmap | Defines the shape of the waves. The final waves are produced by combining these two normal maps, each scrolling at a different direction, scale, and speed. The second normal map is half as large as the first one. |
| Wave speed | Scrolling speed for the first normal map (1st and 2nd numbers) and the second normal map (3rd and 4th numbers). |
| Fresnel | A texture with an alpha channel that controls the Fresnel effect - how much reflection versus refraction is visible, based on the viewing angle. |
The rest of the properties are not used by the reflective and refractive shaders themselves, but need to be set up in case the user's video card does not support them and a simpler shader fallback must be used:
| Reflective color/cube and fresnel | Defines the water color (RGB) and the Fresnel effect (A) based on the viewing angle. |
| Horizon color | The color of the water at the horizon. (Used only by the simple water shader.) |
| Fallback texture | Texture used to represent the water on very old video cards, if none of the better-looking shaders can run on them. |
Hardware support
- Reflective + refractive water works on graphics cards supporting pixel shader 2.0 (GeForce FX and up, Radeon 9500 and up, Intel 9xx). On older cards, reflective water is used.
- Reflective water works on graphics cards supporting pixel shader 1.4 (GeForce FX and up, Radeon 8500 and up, Intel 9xx). On older cards, simple water is used.
- Simple water works on almost any card, with various levels of detail depending on hardware capabilities.
HOWTO-exportFBX
Unity supports FBX files, which can be generated from many popular 3D applications. Use these guidelines to help ensure the best results.
Select > Prepare > Check Settings > Export > Verify > Import
What do you want to export? Be aware of the export scope, e.g. meshes, cameras, lights, animation rigs, etc.
- Applications often let you export selected objects or a whole scene
- Make sure you are exporting only the objects you want to use from your scene by either exporting selected, or removing unwanted data from your scene.
- Good working practice often means keeping a working file with all lights, guides, control rigs etc. but only export the data you need with export selected, an export preset or even a custom scene exporter.
What do you need to include? - prepare your assets:
- Meshes - Remove construction history; NURBS, NURMS and subdiv surfaces must be converted to polygons, e.g. triangulate or quadrangulate
- Animation - Select the correct rig, check frame rate, animation length etc.
- Textures - Make sure your textures are already sourced from your Unity project or copied into a folder called \textures in your project
- Smoothing - Check if you want smoothing groups and/or smooth mesh
How do I include those elements? - check the FBX export settings
- Be aware of your settings in the export dialogue so that you know what to expect and can match up the fbx settings in Unity - see figs 1, 2 & 3 below
- Nodes, markers and their transforms can be exported
- Cameras and Lights are not currently imported in to Unity
Which version of FBX are you using? If in doubt, use 2012.2
- Autodesk updates its FBX installer regularly, and it can provide different results with different versions of Autodesk's own software and other 3rd party 3D apps.
- See Advanced Options > FBX file format
Will it work? - Verify your export
- Check your file size - do a sanity check on the file size (e.g. >10kb?)
- Re-import your FBX into a new scene in the 3D package you used to generate it - is it what you expected?
Import!
- Import into Unity
- Check FBX import settings in the inspector: textures, animations, smoothing, etc.
See below for Maya FBX dialogue example:
Fig 1 General, Geometry & Animation

Fig 2 Lights, Advanced options

HOWTO-ArtAssetBestPracticeGuide
Unity supports textured 3D models from a variety of programs or sources. This short guide has been put together by games artists with developers at Unity, to help you create assets that work better and more efficiently in your Unity project.
Scale & Units
- Set your system and project units for your software to work consistently with Unity e.g. Metric.
- Working to scale can be important for both lighting and physics simulation.
- Be aware, for example, that 3ds Max's system unit defaults to inches while Maya's defaults to centimetres.
- Unity has different scaling for FBX and 3D application files on import; check the FBX import scale setting in the Inspector.
- If in doubt, export a metre cube with your scene to match in Unity.
- Animation frame rate defaults can differ between packages; it is a good idea to set this consistently across your pipeline, e.g. 30fps.
Files & Objects
- Name objects in your scene sensibly and uniquely. This can help you locate and troubleshoot specific meshes in your project.
- Avoid special characters such as *()?$ etc.
- Use simple but descriptive names for both objects and files (allow for duplication later).
- Keep your hierarchies as simple as you can.
- With big projects in your 3D application, consider having a working file outside your Unity project directory; this can often avert time-consuming updates and the importing of unnecessary data.

Sensibly named objects help you find stuff quickly
Mesh
- Build with an efficient topology. Use polygons only where you need them.
- Optimise your geometry if it has too many polygons. Many character models need to be intelligently optimised, or even rebuilt by an artist, especially if sourced/built from:
- 3D capture data
- Poser
- Zbrush
- Other hi density Nurbs/Patch models designed for render
- Evenly spaced polygons in buildings, landscape and architecture, where you can afford them, will help spread lighting and avoid awkward kinks.
- Avoid really long, thin triangles.

Stairway to framerate heaven
The method you use to construct objects can have a massive effect on the number of polygons, especially when they are not optimised. Observe the same shape of mesh: 156 triangles (right) vs 726 (left). 726 may not sound like a great deal of polygons, but if this is used 40 times in a level, you will really start to see the savings. A good rule of thumb is often to start simple and add detail where needed; it's always easier to add polygons than to take them away.
Textures
Textures are more efficient, and don't need rescaling at build time, if authored to specific texture sizes: a power of two up to 4096×4096 pixels, e.g. 512×512 or 256×1024 (2048×2048 is the highest supported on many graphics cards/platforms). There is lots of expertise online for creating good textures, but some of these guidelines can help you get the most efficient results from your project:
- Working with a hi-res source file outside your Unity project can be good working practice (such as a PSD or Gimp file; you can always downsize from source, but not the other way round).
- Use the texture resolution output you require in your scene (e.g. save a copy, such as a 256×256 optimised PNG or a TGA file); you can make a judgement based on where the texture will be seen and what it is mapped to.
- Store your output texture files together in your Unity project for example: \Assets\textures
- Make sure your 3D working file is referring to the same textures for consistency when you save/export.
- Make use of the available space in your texture, but be aware that different materials requiring different parts of the same texture can end up loading that texture multiple times.
- For alpha (cutout) and elements that may require different shaders, separate the textures. E.g. the single texture below (left) has been replaced by 3 smaller textures below (right)

1 texture (left) vs 3 textures (right)
- Make use of tiling textures (which seamlessly repeat); then you can use a better resolution repeating over space.
- Remove easily noticeable repeating elements from your bitmap, and be careful with contrast; if you want to add details, use decals and objects to break up the repeats.

Tiling textures ftw
- Unity takes care of compression for the output platform, so unless your source is already a JPG of the correct resolution it's better to use a lossless format for your textures.
- When creating a texture page from photographs, reduce the page to individual modular sections that can repeat. For example, you don't need 12 identical windows using up texture space; that way you can have more pixel detail for that one window.

Do you need ALL those windows?
Materials
- Organise and name the materials in your scene. This way you can find and edit your materials in Unity more easily once they've been imported.
- You can choose to create materials in Unity from either:
- <modelname>-< material name> or:
- <texture name>. Make sure you are aware of which you want.
- Settings for materials in your native package will not all be imported to Unity:
- Diffuse Colour, Diffuse texture and Names are usually supported
- Shader model, specular, normal, other secondary textures and substance material settings will not be recognised/imported (coming in 3.5)
Import/Export
Unity can use two types of files: saved 3D application files and exported 3D formats. Which type you decide to use can be quite important:
Saved application files
Unity can import, through conversion: Max, Maya, Blender, Cinema4D, Modo, Lightwave & Cheetah3D files, e.g. .MAX, .MB, .MA etc. See more in Importing Objects.
Advantages:
- Quick iteration process (save and Unity updates)
- Simple initially
Disadvantages:
- A licensed copy of that software must be installed on all machines using the Unity project
- Files can become bloated with unnecessary data
- Big files can slow Unity updates
- Less validation, so it is harder to troubleshoot problems
Exported 3D formats
Unity can also read FBX, OBJ, 3DS, DAE & DXF files. For a general export guide, you can refer to this section.
Advantages:
- Only export the data you need
- Verify your data (re-import into 3D package) before Unity
- Generally smaller files
- Encourages modular approach
Disadvantages:
- Can be a slower pipeline for prototyping and iterations
- Easier to lose track of versions between source (working file) and game data (exported FBX)
HOWTO-importObject
Unity supports importing from the most common 3D applications. Select the application you are working with from the list below.
Other applications
Unity can read .FBX, .dae, .3DS, .dxf and .obj files, so check whether your program can export to one of these formats. FBX exporters for popular 3D packages can be found here. Many packages also have a Collada exporter available.
Hints
- Store textures in a folder called Textures next to the exported mesh. This will enable Unity to always find the textures and automatically connect them to the materials. For more information, see Textures.
See also:
- Modeling Optimized Characters
- How do I use normal maps?
- Mesh Import Settings
- How do I fix the rotation of an imported model?
HOWTO-ImportObjectMaya
Unity natively imports Maya files. To get started, simply place your .mb or .ma file in your project's Assets folder. When you switch back into Unity, the scene is imported automatically and will show up in the Project View.
To see your model in Unity, simply drag it from the Project View into the Scene View or Hierarchy View.
Unity currently imports from Maya:
- All nodes with position, rotation and scale. Pivot points and names are also imported.
- Meshes with vertex colors, normals and up to 2 UV sets
- Materials with texture and diffuse color. Multiple materials per mesh.
- Animations FK & IK
- Bone-based animations
Unity does not import blend shapes; use bone-based animations instead. Unity automatically triangulates polygonal meshes when importing, so you don't need to do this manually in Maya.
If you are using IK to animate characters, you have to select the imported .mb file in the Project View and choose Bake IK & Simulation in the Import Settings dialog in the Inspector.
Requirements
In order to import Maya .mb and .ma files, you need to have Maya installed on the machine you are using Unity on. Maya 8.0 and later are supported.
Behind the import process (Advanced)
When Unity imports a Maya file, it launches Maya in the background. Unity then communicates with Maya to convert the .mb file into a format Unity can read. The first time you import a Maya file into Unity, Maya has to launch in a command-line process; this can take around 20 seconds, but subsequent imports will be very quick.
Troubleshooting
- Keep your scene simple; try working with a file that contains only the objects you need in Unity.
- If you have any issues with the mesh, check whether you have converted patches, NURBS surfaces etc. into polygons (Modify > Convert, and also Mesh > Quadrangulate/Triangulate); Unity only supports polygons.
- In rare cases Maya can mess up the node history, which can result in models not exporting correctly. Fortunately, this is very easy to fix by selecting the relevant cleanup command.
- Unity tries to keep up with the latest FBX versions, so if you have any issues importing a model, check for the latest FBX exporter on the Autodesk website, or try reverting to FBX 2012.
- Animation baking in Maya is done with FBX instead of natively, which allows more complex animations to be baked properly to the FBX format. If you are using driven keys, make sure to set at least one key on your drivers for the animation to bake properly.
HOWTO-ImportObjectCinema4D
Unity natively imports Cinema 4D files. To get started, simply place your .c4d file in your project's Assets folder. When you switch back into Unity, the scene is imported automatically and will show up in the Project View.
To see your model in Unity, simply drag it from the Project View into the Scene View.
If you modify your .c4d file, Unity will automatically update whenever you save.
Unity currently imports:
- All objects with position, rotation and scale. Pivot points and names are also imported.
- Meshes with UVs and normals.
- Materials with texture and diffuse color. Multiple materials per mesh.
- Animations FK (IK needs to be baked manually).
- Bone-based animations.
Unity does not currently import Point Level Animations (PLA); use bone-based animations instead.
Animated characters using IK
If you are using IK to animate your characters in Cinema 4D, you have to bake the IK using the relevant menu command before exporting. If you don't bake the IK prior to importing into Unity, you will most likely only get animated locators, not animated bones.
Requirements
- You need to have at least Cinema 4D version 8.5 installed to import .c4d files.
If you don't have Cinema 4D installed on your machine but want to import a Cinema 4D file from another machine, you can export to the FBX format, which Unity imports natively:
- Open the Cinema 4D file.
- In Cinema 4D, choose the export command.
- Place the exported fbx file in your Unity project's Assets folder. Unity will import the fbx file automatically.
Hints
- To maximize import speed when importing Cinema 4D files, go to the Cinema 4D preferences and select the FBX 6.0 settings. Uncheck Embed Textures.
Behind the import process (Advanced)
When Unity imports a Cinema 4D file, it installs a Cinema 4D plugin and launches Cinema 4D in the background. Unity then communicates with Cinema 4D to convert the .c4d file into a format Unity can read. The first time a .c4d file is imported and Cinema 4D is not open yet, it takes a short while to launch, but afterwards .c4d files will import very quickly.
Cinema 4D 10 support
When importing .c4d files directly, Unity behind the scenes lets Cinema 4D convert the file into FBX. When Cinema 4D 10.0 shipped, its FBX exporter was severely broken. Cinema 4D 10.1 fixed many of these issues, so we strongly recommend upgrading Cinema 4D 10 to 10.1.
Some issues remain with Maxon's FBX exporter. Currently there seems to be no reliable way to export animated characters that use the joints introduced in Cinema 4D 10; however, the old bone system available in 9.6 exports perfectly. Thus, when creating animated characters it is critical that you use the old bone system instead of joints.
Page last updated: 2012-11-09
HOWTO-ImportObjectMax
If you make your 3D objects in 3ds Max, you can save your .max files directly into your Project, or export them into Unity using the Autodesk .FBX or another generic format.
Saving a Max file and exporting a generic 3D file type each have advantages and disadvantages (see Mesh). Unity currently imports from 3ds Max:
- All nodes with position, rotation and scale. Pivot points and names are also imported.
- Meshes with vertex colors, normals and up to 2 UV sets
- Materials with diffuse texture and color. Multiple materials per mesh.
- Animations
- Bone-based animations
To export to FBX from 3ds Max:
- Download and install the latest fbx exporter from the Autodesk website.
- Export your scene or selected objects in the .fbx format. Using the default export options should be OK.
- Copy the exported fbx file into your Unity project folder.
- When you switch back into Unity, the .fbx file is imported automatically.
- Drag the file from the Project View into the Scene View.
Exporter options
Using the default FBX exporter options (which basically export everything) should be fine.
Embed textures - this stores the image maps in the file; good for portability, not so good for file size.

Default FBX exporter options (for fbx plugin version 2013.3)
Exporting bone-based animations
There is a procedure you should follow when you want to export bone-based animations:
- Set up the bone structure as you please.
- Create the animations you want, using FK and/or IK.
- Select all bones and/or IK solvers.
- Go to the relevant panel and press the collapse button. Unity applies a key filter, so the number of keys you export is irrelevant.
- Choose Export or Export selected as the newest FBX format.
- Drop the FBX file into Assets as usual.
- In Unity, you must reassign the texture to the material in the root bone.
When exporting a bone hierarchy with mesh and animations from 3ds Max to Unity, the resulting GameObject hierarchy corresponds to what you can see in the Schematic view in 3ds Max. One difference is that Unity places a GameObject as the new root, containing the animations, and puts the mesh and material information in the root bone.
If you prefer the animation and mesh information to be on the same Unity GameObject, go to the Hierarchy view in 3ds Max and parent the mesh node to a bone in the bone hierarchy.
Exporting two UV sets for lightmapping
3ds Max's Render To Texture and automatic unwrapping functionality can be used to create lightmaps. Unity has a built-in lightmapper, but you may prefer using 3ds Max if that fits your workflow better. Usually one UV set is used for the main texture and/or normal maps, and another UV set is used for the lightmap texture. For both UV sets to come through properly, the material in 3ds Max has to be Standard, and both the Diffuse (for the main texture) and Self-Illumination (for the lightmap) map slots have to be set up:

Material setup for lightmapping in 3ds Max, using a self-illumination map
Note that if the object uses the Shell material type, the current Autodesk FBX exporter will not export the UVs correctly.
Alternatively, you can use a Multi/Sub Object material type and set up two sub-materials, using the main texture and the lightmap in their diffuse map slots, as shown below. However, if the faces of your model use different sub-material IDs, this results in multiple materials being imported, which is not optimal for performance.

Alternate material setup for lightmapping in 3ds Max, using a Multi/Sub Object material
Troubleshooting
If you have problems importing some models, make sure that you have the latest FBX plugin installed (available from the Autodesk website), or revert to FBX 2012.
Page last updated: 2012-11-09
HOWTO-ImportObjectCheetah3D
Unity natively imports Cheetah3D files. To get started, simply place your .jas file in your project's Assets folder. When you switch back into Unity, the scene is imported automatically and will show up in the Project View.
To see your model in Unity, simply drag it from the Project View into the Scene View.
If you modify your .jas file, Unity will automatically update whenever you save.
Unity currently imports from Cheetah3D:
- All nodes with position, rotation and scale. Pivot points and names are also imported.
- Meshes with vertices, polygons, tangents, UVs and normals.
- Animations
- Materials with diffuse color and textures.
Requirements
- You need to have at least Cheetah3D 2.6 installed.
HOWTO-ImportObjectModo
Unity natively imports modo files. This works under the hood by using modo's COLLADA exporter; this approach is used with modo version 501 and later. To get started, save your .lxo file in the project's Assets folder. When you switch back into Unity, the file is imported automatically and will show up in the Project View.
For older versions of modo (prior to 501), save your modo scene as an FBX or COLLADA file into the Unity project folder. When you switch back into Unity, the scene is imported automatically and will show up in the Project View.
To see your model in Unity, drag it from the Project View into the Scene View.
If you modify the lxo file, Unity will automatically update whenever you save.
Unity currently imports:
- All nodes with position, rotation and scale. Pivot points and names are also imported.
- Meshes with vertices, normals and UVs.
- Materials with texture and diffuse color. Multiple materials per mesh.
- Animations
Requirements
- modo 501 or later is required for native import of *.lxo files.
HOWTO-importObjectLightwave
You can import meshes and animations from Lightwave using the FBX plugin for Lightwave.
Unity currently imports:
- All nodes with position, rotation and scale. Pivot points and names are also imported.
- Meshes with UVs and normals.
- Materials with texture and diffuse color. Multiple materials per mesh.
- Animations
- Bone-based animations
Installation
You can download the latest Lightwave FBX exporter from:
- OS X Lightwave 8.2 and 9.0 plugin
- OS X Lightwave 8.0 plugin
- Windows Lightwave 8.2 and 9.0 plugin
- Windows Lightwave 8.0 plugin
By downloading these plugins you automatically agree to this licence.
There are two versions of the plugin, one for LightWave 8.0 and one for LightWave 8.2 through 9.0. Make sure you install the correct version.
The plugin for Mac comes in an OS X package. If you double-click the package to install it, the installer will place it in the correct folder. If it can't find the LightWave plugin folder, it creates its own LightWave folder in your Applications folder and dumps it there. If the latter happens, you must move the plugin into LightWave's plugin folder (or a subfolder) yourself. You then have to add the plugin to LightWave via the Edit Plugins panel; see the LightWave manual for details on adding plugins.

Once added to LightWave, the plugin is accessible via the Generics menu tab (under Utilities). If the Generics menu is not visible, you have to add it using the Configure Menus panel; it can be found there in the Plugins category, where the plugin is called "Generic Plugins". Add it to a convenient menu (see the LightWave manual for details on how to do this).
Further details on installation can be found in the release notes, which can be downloaded with the installer.
エクスポート
All objects and animations have to be exported from Layout (there is no Modeler FBX exporter).
1. Select Export to FBX from the Generics menu

2. Select the appropriate settings in the fbx export dialog
- Select the fbx file name. Make sure to save the exported fbx file in the Assets folder of your current Unity project.
- You must select Embed Textures in the FBX dialog panel, otherwise the exported object will have no UVs. This is a bug in LightWave's fbx exporter which, according to Autodesk, will be fixed in a future version.
- If you want to export animations into Unity, you must check Animations. You also need to check either Lights or Cameras.
- To change the name of the exported animation clip in Unity, change the name from LW Take 001 to a name of your choice.

3. Switch to Unity.
- Unity will automatically import the fbx file and generate materials for the textures.
- Drag the imported fbx file from the Project View into the Scene View.
Notes
- You must select Embed Textures in the FBX panel on export, or no UVs will be exported.
- If you want to export animations, you must enable Animations and either Cameras or Lights.
- We strongly recommend placing your textures in a folder called Textures next to the fbx file. This ensures that Unity can always find the textures and automatically connect them to the materials.
HOWTO-ImportObjectBlender
Unity natively imports Blender files. This works under the hood by using the Blender FBX exporter, which was added to Blender in version 2.45. For this reason, you need to update to Blender 2.45 or later (but see the requirements below).
To get started, save your .blend file in your project's Assets folder. When you switch back into Unity, the file is imported automatically and will show up in the Project View.
To see your model in Unity, drag it from the Project View into the Scene View.
If you modify your .blend file, Unity will automatically update whenever you save.
Unity currently imports:
- All nodes with position, rotation and scale. Pivot points and names are also imported.
- Meshes with vertices, polygons, tangents, UVs and normals.
- Bones
- Skinned meshes
- Animations
Requirements
- You need to have Blender 2.45-2.49, or 2.58 and later (versions 2.50-2.57 do not work, because FBX export was changed/broken in Blender).
- Textures and diffuse color are not assigned automatically. You can assign them manually by dragging the texture onto the mesh in the Scene View in Unity.
Workflow
- Getting started with Mono Develop
- How do I reuse assets between projects?
- How do I install or upgrade Standard Assets?
- Porting a Project Between Platforms
HOWTO-MonoDevelop
Mono Develop is shipped with Unity 3.x. This IDE helps you take care of the scripting part of your game and debug it.
Setting up Mono Develop
To set up Mono Develop to work with Unity, go to the Unity preferences and set it as your default editor.

Setting Mono Develop as the default editor
After this, create a new project or open an existing one, and click the sync button to synchronize the project with Mono Develop.

Syncing Mono Develop
This will open your project in Mono Develop (scripting files only; no assets). You are now ready to start debugging.
If you have problems setting up your project, also try visiting the troubleshooting page.
Page last updated: 2012-11-09
HOWTO-exportpackage
As you build your game, Unity stores a lot of metadata about your assets (import settings, links to other assets, etc.). If you want to take your assets into a different project, there is a specific way to do that. Here's how you can easily move assets between projects while preserving all this information:
- In the Project View, select all the asset files you want to export.
- Choose the package export command from the menu bar.
- Name and save the package anywhere you like.
- Open the project you want to bring the assets into.
- Choose the package import command from the menu bar.
- Select the package file you saved in step 3.
Hints
- When exporting a package, Unity can export all dependencies as well. So, for example, if you select a Scene and export a package with all dependencies, all models, textures and other assets that appear in the scene will be exported as well. This can be a quick way of exporting a bunch of assets without manually locating them all.
- If you store your exported packages in the Standard Packages folder next to your Unity application, they will appear in the Create New Project dialog.
HOWTO-InstallStandardAssets
Unity ships with multiple Standard Assets packages. These are collections of assets that are widely used by most Unity users. When you create a new project from the Project Wizard, you can optionally include these asset collections. The assets are copied from the Unity install folder into your new project. This means that if you upgrade Unity to a new version, you will not automatically get the new versions of these assets; you have to upgrade them yourself. Also note that new versions of, for example, effects may behave differently for performance or quality reasons, so you may need to tweak their parameters again. It is important to consider this before upgrading if you don't want your game's look or behaviour to change suddenly; check the package contents and Unity's release notes.
Standard Assets contain useful things like a first person controller, skyboxes, lens flares, Water prefabs and Image Effects.

Standard Asset packages listed when creating a new project
Upgrading
You may want to upgrade your Standard Assets, for example because a new version of Unity ships with new Standard Assets:
- Open your project.
- Choose the package you want to update from the submenu.
- A list of new or replaced assets will be presented; click to import.
For the cleanest possible upgrade, you should consider removing the old package contents first, as some scripts, effects or prefabs might have become deprecated or unneeded, and Unity packages have no way of removing (unneeded) files (keep a safe copy of the old version in case you still use it).
Page last updated: 2012-11-09
HOWTO-PortingBetweenPlatforms
Most of Unity's API and project structure is identical for all supported platforms and in some cases a project can simply be rebuilt to run on different devices. However, fundamental differences in the hardware and deployment methods mean that some parts of a project may not port between platforms without change. Below are details of some common cross-platform issues and suggestions for solving them.
Input
The most obvious example of different behaviour between platforms is in the input methods offered by the hardware.
Keyboard and joypad
The Input.GetAxis function is very convenient on desktop platforms as a way of consolidating keyboard and joypad input. However, this function doesn't make sense for the mobile platforms, which rely on touchscreen input. Likewise, the standard desktop keyboard input doesn't port over to mobiles well for anything other than typed text. It is worthwhile to add a layer of abstraction to your input code if you are considering porting to other platforms in the future. As a simple example, if you were making a driving game then you might create your own input class and wrap the Unity API calls in your own functions:
// Returns values in the range -1.0 .. +1.0 (== left .. right).
function Steering() {
    return Input.GetAxis("Horizontal");
}

// Returns values in the range -1.0 .. +1.0 (== accel .. brake).
function Acceleration() {
    return Input.GetAxis("Vertical");
}

var currentGear: int;

// Returns an integer corresponding to the selected gear.
function Gears() {
    if (Input.GetKeyDown("p"))
        currentGear++;
    else if (Input.GetKeyDown("l"))
        currentGear--;

    return currentGear;
}
One advantage of wrapping the API calls in a class like this is that they are all concentrated in a single source file and are consequently easy to locate and replace. However, the more important idea is that you should design your input functions according to the logical meaning of the inputs in your game. This will help to isolate the rest of the game code from the specific method of input used with a particular platform. For example, the Gears function above could be modified so that the actual input comes from touches on the screen of a mobile device. Using an integer to represent the chosen gear works fine for all platforms, but mixing the platform-specific API calls with the rest of the code would cause problems. You may find it convenient to use platform dependent compilation to combine the different implementations of the input functions in the same source file and avoid manual swaps.
Touches and clicks
The Input.GetMouseButtonXXX functions are designed so that they have a reasonably obvious interpretation on mobile devices even though there is no "mouse" as such. A single touch on the screen is reported as a left click and the Input.mousePosition property gives the position of the touch as long as the finger is touching the screen. This means that games with simple mouse interaction can often work transparently between the desktop and mobile platforms. Naturally, though, the conversion is often much less straightforward than this. A desktop game can make use of more than one mouse button and a mobile game can detect multiple touches on the screen at a time.
As with API calls, the problem can be managed partly by representing input with logical values that are then used by the rest of the game code. For example, a pinch gesture to zoom on a mobile device might be replaced by a plus/minus keystroke on the desktop; the input function could simply return a float value specifying the zoom factor. Likewise, it might be possible to use a two-finger tap on a mobile to replace a right button click on the desktop. However, if the properties of the input device are an integral part of the game then it may not be possible to remodel them on a different platform. This may mean that the game cannot be ported at all, or that the input and/or gameplay need to be modified extensively.
Accelerometer, compass, gyroscope and GPS
These inputs derive from the mobility of handheld devices and so may not have any meaningful equivalent on the desktop. However, some use cases simply mirror standard game controls and can be ported quite easily. For example, a driving game might implement the steering control from the tilt of a mobile device (determined by the accelerometer). In cases like this, the input API calls are usually fairly easy to replace, so the accelerometer input might be replaced by keystrokes, say. However, it may be necessary to recalibrate inputs or even vary the difficulty of the game to take account of the different input method. Tilting a device is slower and ultimately more strenuous than pressing keys and may also make it harder to concentrate on the display. This may result in the game being more difficult to master on a mobile device, so it may be appropriate to slow down gameplay or allow more time per level. This will require the game code to be designed so that these factors can be adjusted easily.
Memory, storage and CPU performance
Mobile devices inevitably have less storage, memory and CPU power available than desktop machines and so a game may be difficult to port simply because its performance is not acceptable on lower powered hardware. Some resource issues can be managed but if you are pushing the limits of the hardware on the desktop then the game is probably not a good candidate for porting to a mobile platform.
Movie playback
Currently, mobile devices are highly reliant on hardware support for movie playback. The result is that playback options are limited and certainly don't give the flexibility that the MovieTexture asset offers on desktop platforms. Movies can be played back fullscreen on mobiles but there isn't any scope for using them to texture objects within the game (so it isn't possible to display a movie on a TV screen within the game, for example). In terms of portability, it is fine to use movies for introductions, cutscenes, instructions and other simple pieces of presentation. However, if movies need to be visible within the game world then you should consider whether the mobile playback options will be adequate.
Storage requirements
Video, audio and even textures can use a lot of storage space and you may need to bear this in mind if you want to port your game. Storage space (which often also corresponds to download time) is typically not an issue on desktop machines but this is not the case with mobiles. Furthermore, mobile app stores often impose a limit on the maximum size of a submitted product. It may require some planning to address these concerns during the development of your game. For example, you may need to provide cut-down versions of assets for mobiles in order to save space. Another possibility is that the game may need to be designed so that large assets can be downloaded on demand rather than being part of the initial download of the application.
Automatic memory management
The recovery of unused memory from "dead" objects is handled automatically by Unity and often happens imperceptibly on desktop machines. However, the lower memory and CPU power on mobile devices means that garbage collections can be more frequent and the time they take can impinge more heavily on performance (causing unwanted pauses in gameplay, etc). Even if the game runs in the available memory, it may still be necessary to optimise code to avoid garbage collection pauses. More information can be found on our memory management page.
CPU power
A game that runs well on a desktop machine may suffer from poor framerate on a mobile device simply because the mobile CPU struggles with the game's complexity. Extra attention may therefore need to be paid to code efficiency when a project is ported to a mobile platform. A number of simple steps to improve efficiency are outlined on this page in our manual.
Page last updated: 2012-05-31
MobileDeveloperChecklist
If you are having problems when developing for a mobile platform, this is a checklist to help you solve various problems.
Page last updated: 2012-10-10
MobileCrashes
Checklist for crashes
- Disable code stripping (and set slow with exceptions for iOS)
- Follow the instructions on Optimizing the Size of the Built iOS Player (http://docs.unity3d.com/Documentation/Manual/iphone-playerSizeOptimization.html) to make sure your game does not crash with stripping on iOS.
- Verify it is not because of out of memory (restart your device, use the device with maximum RAM for the platform, be sure to watch the logs)
Editor.log - on the editor
Debug messages, warnings and errors all go to the console. Unity also prints status reports to the console: loading assets, initializing Mono, graphics driver info.
If you are trying to understand what is going on, look at the Editor.log. Here you will get the full picture, not just a console fragment. You can try to understand what's happening and watch the full log of your coding session. This will help you track down what caused Unity to crash, or find out what's wrong with your assets.
Unity prints some things on the devices as well: the logcat console for Android and the Xcode gdb console on iOS devices.

Android
Debugging on Android
- Use the DDMS or ADB tool
- Watch the stacktrace (Android 3 or newer). Either use c++filt (part of the NDK) or other methods, like http://slush.warosu.org/c++filtjs, to decode the mangled function calls
- Look at the .so file that the crash occurs on:
- libunity.so - the crash is in the Unity code or the user code
- libdvm.so - the crash is in the Java world, somewhere with Dalvik. So find Dalvik's stacktrace, and look at your JNI code or anything Java-related (including your possible changes to the AndroidManifest.xml).
- libmono.so - either a Mono bug, or you're doing something Mono strongly dislikes
- If the crash log does not help, you can disassemble the library to get a rough understanding of what has happened:
- Use the ARM EABI tools from the Android NDK like this: objdump.exe -S libmono.so >> out.txt
- Look at the code around the pc value from the stacktrace.
- Try to match that code within the fresh out.txt file.
- Scroll up to understand what is happening in the function it occurs in.

iOS
Debugging on iOS
- Xcode has built in tools. Xcode 4 has a really nice GUI for debugging crashes, Xcode 3 has less.
- Full gdb stack - thread apply all bt
- Enable soft-null-check:
Enable development build and script debugging. Now uncaught null ref exceptions will be printed to the Xcode console with the appropriate managed call stack.
- Try turning the "fast script call" and code stripping off. It may stop some random crashes, like those caused by using some rare .Net functions or reflection.
Strategy
- Try to figure out which script the crash happens in and debug it using MonoDevelop on the device.
- If the crash does not seem to be in your code, take a closer look at the stacktrace; there should be a hint of what is happening. Take a copy, submit it, and we'll take a look.
MobileProfiling
Ports that the Unity profiler uses:
- MulticastPort: 54998
- ListenPorts: 55000 - 55511
- Multicast (unit tests): 55512 - 56023
These ports should be accessible from within the network node. That is, the devices that you're trying to profile should be able to see these ports on the machine running the Unity Editor with the Profiler on.
First steps
Unity relies on the CPU (heavily optimized for the SIMD part of it, like SSE on x86 or NEON on ARM) for skinning, batching, physics, user scripts, particles, etc.
The GPU is used for shaders, drawcalls, image effects.
CPU or GPU bound
- Use the internal profiler to detect the CPU and GPU ms
Pareto analysis
A large majority of problems (80%) are produced by a few key causes (20%).
- Use the Editor profiler to get the most problematic function calls and optimize them first.
- Make sure the scripts run only when necessary.
- Use OnBecameVisible/OnBecameInvisible to disable inactive objects.
- Use coroutines if you don't need some scripts to run every frame.
// Do some stuff every frame:
void Update () {
}

// Do some stuff every 0.2 seconds:
IEnumerator Start () {
    while (true) {
        yield return new WaitForSeconds (0.2f);
    }
}
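The OnBecameVisible/OnBecameInvisible tip above can be sketched as follows (a minimal example; the expensive work is assumed to live in this script's Update):

```csharp
// Disable this script while its renderer is not seen by any camera,
// so its Update stops running.
void OnBecameVisible () {
    enabled = true;
}

void OnBecameInvisible () {
    enabled = false;
}
```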
- Use the .NET System.Threading.Thread class to move heavy calculations to another thread. This lets you run on multiple cores, but the Unity API is not thread-safe, so buffer inputs and results and read and assign them on the main thread.
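That buffering pattern can be sketched like this (ComputeHeavy and ApplyResult are hypothetical; the worker must touch only pure .NET code, never the Unity API):

```csharp
using System.Threading;

object resultLock = new object ();
float[] input;    // filled on the main thread before the worker starts
float[] result;   // produced by the worker, consumed on the main thread

void StartWorker () {
    Thread worker = new Thread (() => {
        float[] r = ComputeHeavy (input);   // hypothetical, Unity-API-free work
        lock (resultLock) { result = r; }   // hand the result back via the buffer
    });
    worker.Start ();
}

void Update () {
    lock (resultLock) {
        if (result != null) {
            ApplyResult (result);   // hypothetical; Unity API is safe here, on the main thread
            result = null;
        }
    }
}
```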
CPU Profiling
Profile user code
Not all user code is shown in the Profiler, but you can use Profiler.BeginSample and Profiler.EndSample to make the required user code appear in the profiler.
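For example (the sample label and the Think call are illustrative):

```csharp
void Update () {
    Profiler.BeginSample ("MyAI.Think");   // this label appears in the Profiler window
    Think ();                              // hypothetical expensive call to measure
    Profiler.EndSample ();
}
```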
GPU Profiling
The Unity Editor profiler cannot show GPU data as of now. We're working with hardware manufacturers to make it happen, with the Tegra devices being the first to appear in the Editor profiler.

iOS
Tools for iOS
- Unity internal profiler (not the Editor profiler). This shows the GPU time for the whole scene.
- PowerVR PVRUniSCo shader analyzer. See below.
- iOS: Xcode OpenGL ES Driver Instruments can show only high-level info:
- Device Utilization % - GPU time spent on rendering in total. >95% means the app is GPU bound.
- Renderer Utilization % - GPU time spent drawing pixels.
- Tiler Utilization % - GPU time spent processing vertices.
- Split count - the number of frame splits, where the vertex data didn't fit into allocated buffers.
PowerVR is a tile-based deferred renderer, so it's impossible to get GPU timings per draw call. However, you can get GPU times for the whole scene using Unity's built-in profiler (the one that prints results to the Xcode output). Apple's tools currently can only tell you how busy the GPU and its parts are, but do not give times in milliseconds.
PVRUniSCo gives cycles for the whole shader, and approximate cycles for each line in the shader code. It runs on Windows and Mac, but it won't match exactly what Apple's drivers are doing anyway. Still, it is a good ballpark measure.

Android
Tools for Android
- Adreno (Qualcomm)
- NVPerfHUD (NVIDIA)
- PVRTune, PVRUniSCo (PowerVR)
On Tegra, NVIDIA provides excellent performance tools that do everything you want: GPU time per draw call, cycles per shader, force 2x2 texture, null view rectangle; they run on Windows, OSX and Linux. PerfHUD ES does not easily work with consumer devices; you need the development board from NVIDIA.
Qualcomm provides the excellent Adreno Profiler, which is Windows only but works with consumer devices! It features timeline graphs, frame capture, frame debug, API calls, a shader analyzer, and live editing.
Graphics related CPU profiling
The internal profiler gives a good overview per module:
- time spent in OpenGL ES API
- batching efficiency
- skinning, animations, particles
Memory
There is Unity memory and mono memory.
Mono memory
Mono memory handles script objects and wrappers for Unity objects (game objects, assets, components, etc). The Garbage Collector cleans up when an allocation does not fit in the available memory or on a System.GC.Collect() call.
Memory is allocated in heap blocks. More can be allocated if the data cannot fit into the existing blocks. Heap blocks are kept by Mono until the app is closed; in other words, Mono does not release any memory back to the OS (as of Unity 3.x). Once you allocate a certain amount of memory, it is reserved for Mono and not available to the OS. Even when you release it, it becomes available internally for Mono only, not for the OS, so the heap memory value in the Profiler will only increase, never decrease.
If the system cannot fit new data into the allocated heap block, Mono runs a garbage collection and may allocate a new heap block (for example, due to fragmentation).
Too many heap sections means you've run out of Mono memory (because of fragmentation or heavy usage).
Use System.GC.GetTotalMemory to get the total used Mono memory.
The general advice is, use as small an allocation as possible.
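A simple way to keep an eye on Mono memory at runtime is a logging coroutine (a sketch; the log interval is arbitrary):

```csharp
IEnumerator LogMonoMemory () {
    while (true) {
        // GC.GetTotalMemory(false) returns the currently used Mono memory without forcing a collect
        Debug.Log ("Mono used: " + System.GC.GetTotalMemory (false) + " bytes");
        yield return new WaitForSeconds (5.0f);
    }
}
```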
Unity memory
Unity memory handles Asset data (Textures, Meshes, Audio, Animation, etc), Game objects, Engine internals (Rendering, Particles, Physics, etc). Use Profiler.usedHeapSize to get the total used Unity memory.
Memory map
No tools yet but you can use the following.
- Unity Profiler - not perfect, skips stuff, but you can get an overview. It works on the device!
- Internal profiler
- Shows Used heap and allocated heap - see mono memory.
- Shows the number of mono allocations per frame.
- Xcode tools - iOS
- Xcode Instruments Activity Monitor - Real Memory column.
- Xcode Instruments Allocations - net allocations for created and living objects.
- VM Tracker
- textures usually get allocated with IOKit label.
- meshes usually go into VM Allocate.
- Make your own tool
- FindObjectsOfTypeAll (type : Type) : Object[]
- FindObjectsOfType (type : Type): Object[]
- GetRuntimeMemorySize (o : Object) : int
- GetMonoHeapSize
- GetMonoUsedSize
- Profiler.BeginSample/EndSample - profile your own code
- UnloadUnusedAssets () : AsyncOperation
- System.GC.GetTotalMemory/Profiler.usedHeapSize
- References to the loaded objects - There is no way to figure this out. A workaround is to Find references in scene for public variables.
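A minimal "own tool" built from the calls above might dump the memory used by every loaded texture (a sketch; the output format is arbitrary):

```csharp
void DumpTextureMemory () {
    // FindObjectsOfTypeAll also returns assets that are loaded but not in the scene
    Object[] textures = Resources.FindObjectsOfTypeAll (typeof (Texture2D));
    foreach (Texture2D t in textures) {
        Debug.Log (t.name + ": " + Profiler.GetRuntimeMemorySize (t) + " bytes");
    }
}
```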
Memory hiccups
- Garbage collector
- This fires when the system cannot fit new data into the allocated heap block.
- Don't use OnGUI on mobiles
- It fires several times per frame
- It completely redraws the view.
- It creates tons of memory allocation calls that require Garbage Collection to be invoked.
- Creating/removing too many objects too quickly?
- This may lead to fragmentation.
- Use the Editor profiler to track the memory activity.
- The internal profiler can be used to track the mono memory activity.
- System.GC.Collect() - you can use this .NET function when it's OK to have a hiccup.
- New memory allocations
- Allocation hiccups
- Use lists of preallocated, reusable class instances to implement your own memory management scheme.
- Don't make huge allocations per frame; cache and preallocate instead
- Problems with fragmentation?
- Preallocate the memory pool.
- Keep a List of inactive GameObjects and reuse them instead of Instantiating and Destroying them.
- Out of mono memory
- Profile memory activity - when does the first memory page fill up?
- Do you really need so many gameobjects that a single memory page is not enough?
- Use structs instead of classes for local data. Classes are stored on the heap; structs on the stack.
- Allocation hiccups
class MyClass {
    public int a, b, c;
}

struct MyStruct {
    public int a, b, c;
}

void Update () {
    // BAD
    // allocated on the heap, will be garbage collected later!
    MyClass c = new MyClass();

    // GOOD
    // allocated on the stack, no GC going to happen!
    MyStruct s = new MyStruct();
}
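The pooling advice above ("keep a List of inactive GameObjects and reuse them") can be sketched like this (a minimal pool using Unity 3.x era API; the prefab field is an assumption and needs `using System.Collections.Generic;`):

```csharp
public GameObject prefab;                      // assigned in the Inspector (assumption)
List<GameObject> pool = new List<GameObject> ();

GameObject Spawn (Vector3 position) {
    GameObject go;
    if (pool.Count > 0) {
        go = pool[pool.Count - 1];             // reuse an inactive instance
        pool.RemoveAt (pool.Count - 1);
        go.transform.position = position;
        go.SetActiveRecursively (true);        // Unity 3.x activation API
    } else {
        go = (GameObject) Instantiate (prefab, position, Quaternion.identity);
    }
    return go;
}

void Despawn (GameObject go) {
    go.SetActiveRecursively (false);           // hide instead of Destroy, no GC pressure
    pool.Add (go);
}
```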
- Read the relevant section in the manual: http://docs.unity3d.com/Documentation/Manual/UnderstandingAutomaticMemoryManagement.html
Out of memory crashes
At some point a game may crash with "out of memory", even though in theory it should fit fine. When this happens, compare your normal game memory footprint with the allocated memory size when the crash happens. If the numbers are not similar, then there is a memory spike. This might be due to:
- Two big scenes being loaded at the same time - use an empty scene between two bigger ones to fix this.
- Additive scene loading - remove unused parts to maintain the memory size.
- Huge asset bundles loaded to the memory
- Loading via WWW or instantiating (a huge amount of) big objects like:
- Textures without proper compression (a no go for mobiles).
- Textures having Get/Set pixels enabled. This requires an uncompressed copy of the texture in memory.
- Textures loaded from JPEG/PNGs at runtime are essentially uncompressed.
- Big mp3 files marked as decompress on loading.
- Keeping unused assets in odd caches like static MonoBehaviour fields, which are not cleared when changing scenes.
MobileOptimisation
Just like on PCs, mobile platforms like iOS and Android have devices of various levels of performance. You can easily find a phone that's 10x more powerful for rendering than some other phone. A fairly easy way of scaling:
- Make sure it runs okay on baseline configuration
- Use more eye-candy on higher performing configurations:
- Resolution
- Post-processing
- MSAA
- Anisotropy
- Shaders
- Fx/particles density, on/off
Focus on GPUs
Graphics performance is bound by fillrate, pixel and geometric complexity (vertex count). All three of these can be reduced if you can find a way to cull more renderers. Occlusion culling could help here. Unity will automatically cull objects outside the viewing frustum.
On mobiles you're essentially fillrate bound (fillrate = screen pixels * shader complexity * overdraw), and over-complex shaders are the most common cause of problems. So use the mobile shaders that come with Unity, or design your own but make them as simple as possible. If possible, simplify your pixel shaders by moving code to the vertex shader.
If reducing the Texture Quality in Quality Settings makes the game run faster, you are probably limited by memory bandwidth. So compress textures, use mipmaps, reduce texture size, etc.
LOD (Level of Detail) makes objects simpler or eliminates them completely as they move further away. The main goal is to reduce the number of draw calls.
Good practice
Mobile GPUs have huge constraints in how much heat they produce, how much power they use, and how large or noisy they can be. So compared to the desktop parts, mobile GPUs have way less bandwidth, low ALU performance and texturing power. The architectures of the GPUs are also tuned to use as little bandwidth & power as possible.
Unity is optimized for OpenGL ES 2.0 and uses the GLSL ES shading language (similar to HLSL). Built-in shaders are most often written in HLSL (also known as Cg). This is cross-compiled into GLSL ES for mobile platforms. You can also write GLSL directly if you want to, but doing that limits you to OpenGL-like platforms (e.g. mobile + Mac) since there currently are no GLSL->HLSL translation tools. When you use float/half/fixed types in HLSL, they end up as highp/mediump/lowp precision qualifiers in GLSL ES.
Here is the checklist for good practice:
- Keep the number of materials as low as possible. This makes it easier for Unity to batch stuff.
- Use texture atlases (large images containing a collection of sub-images) instead of a number of individual textures. These are faster to load, have fewer state switches, and are batching friendly.
- Use Renderer.sharedMaterial instead of Renderer.material if using texture atlases and shared materials.
- Forward rendered pixel lights are expensive.
- Use light mapping instead of realtime lights wherever possible.
- Adjust the pixel light count in Quality Settings. Essentially only the directional light should be per-pixel, everything else per-vertex. Of course, this depends on the game.
- Experiment with Render Mode of Lights in the Quality Settings to get the correct priority.
- Avoid Cutout (alpha test) shaders unless really necessary.
- Keep Transparent (alpha blend) screen coverage to a minimum.
- Try to avoid situations where multiple lights illuminate any given object.
- Try to reduce the overall number of shader passes (Shadows, pixel lights, reflections).
- Rendering order is critical. In the general case:
- fully opaque objects roughly front-to-back.
- alpha tested objects roughly front-to-back.
- skybox.
- alpha blended objects (back to front if needed).
- Post Processing is expensive on mobiles, use with care.
- Particles: reduce overdraw, use the simplest possible shaders.
- Double buffer for Meshes modified every frame:
void Update () {
    // flip between meshes
    bufferMesh = on ? meshA : meshB;
    on = !on;
    bufferMesh.vertices = vertices; // modification to mesh
    meshFilter.sharedMesh = bufferMesh;
}
Shader optimizations
Checking if you are fillrate-bound is easy: does the game run faster if you decrease the display resolution? If yes, you are limited by fillrate.
Try reducing shader complexity by the following methods:
- Avoid alpha-testing shaders; instead use alpha-blended versions.
- Use simple, optimized shader code (such as the Mobile shaders that ship with Unity).
- Avoid expensive math functions in shader code (pow, exp, log, cos, sin, tan, etc). Consider using pre-calculated lookup textures instead.
- Pick the lowest possible number precision format (float, half, fixed in Cg) for best performance.
Focus on CPUs
It is often the case that games are limited by the GPU on pixel processing. So they end up having unused CPU power, especially on multicore mobile CPUs. So it is often sensible to pull some work off the GPU and put it onto the CPU instead (Unity does all of these): mesh skinning, batching of small objects, particle geometry updates.
These should be used with care, not blindly. If you are not bound by draw calls, then batching is actually worse for performance, as it makes culling less efficient and makes more objects affected by lights!
Good practice
- Don't use more than a few hundred draw calls per frame on mobiles.
- FindObjectsOfType (and Unity getter properties in general) are very slow, so use them sensibly.
- Set the Static property on non-moving objects to allow internal optimizations like static batching.
- Spend lots of CPU cycles to do occlusion culling and better sorting (to take advantage of Early Z-cull).
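To avoid the slow lookups mentioned above, cache such results once instead of fetching them every frame (a sketch; the `.transform` getter is one of the property lookups worth caching):

```csharp
Transform cachedTransform;   // the transform getter does a component lookup each call

void Start () {
    cachedTransform = transform;   // cache the result once
}

void Update () {
    // no getter cost per frame
    cachedTransform.position += Vector3.up * Time.deltaTime;
}
```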
Physics
Physics can be CPU heavy. It can be profiled via the Editor profiler. If Physics appears to take too much time on CPU:
- Tweak Time.fixedDeltaTime (in Project Settings -> Time) to be as high as you can get away with. If your game is slow moving, you probably need fewer fixed updates than games with fast action. Fast-paced games need more frequent calculations, and thus fixedDeltaTime needs to be lower or a collision may fail.
- Tweak Physics.solverIterationCount (Physics Manager).
- Use as few Cloth objects as possible.
- Use Rigidbodies only where necessary.
- Use primitive colliders in preference to mesh colliders.
- Never ever move a static collider (i.e. a collider without a Rigidbody), as it causes a big performance hit.
- Shows up in Profiler as Static Collider.Move but actual processing is in Physics.Simulate
- If necessary, add a RigidBody and set isKinematic to true.
- On Windows you can use NVIDIA's AgPerfMon profiling tool set to get more details if needed.

Android
GPU
These are the popular mobile architectures. This means both different hardware vendors than in the PC/console space, and very different GPU architectures than the usual GPUs.
- ImgTec PowerVR SGX - tile-based, deferred: renders everything in small tiles (such as 16x16), shades only visible pixels
- NVIDIA Tegra - classic: renders everything
- Qualcomm Adreno - tiled: renders everything in large tiles (such as 256k); Adreno 3xx can switch to traditional
- ARM Mali - tiled: renders everything in small tiles (such as 16x16)
Spend some time looking into the different rendering approaches and design your game accordingly. Pay special attention to sorting. Define the lowest-end supported devices early in the dev cycle, and test on them with the profiler on as you design your game.
Use platform specific texture compression.
Further reading
- PowerVR SGX Architecture Guide http://imgtec.com/powervr/insider/powervr-sdk-docs.asp
- Tegra GLES2 feature guide http://developer.download.nvidia.com/tegra/docs/tegra_gles2_development.pdf
- Qualcomm Adreno GLES performance guide http://developer.qualcomm.com/file/607/adreno200performanceoptimizationopenglestipsandtricksmarch10.pdf
- Engel, Rible http://altdevblogaday.com/2011/08/04/programming-the-xperia-play-gpu-by-wolfgang-engel-and-maurice-ribble/
- ARM Mali GPU Optimization guide http://www.malideveloper.com/developer-resources/documentation/index.php
Screen resolution
Android version

iOS
GPU
PowerVR (tile-based deferred) is the only architecture to be concerned about.
- ImgTec PowerVR SGX - tile-based, deferred: renders everything in tiles, shades only visible pixels
- ImgTec PowerVR MBX - tile-based, deferred, fixed-function: pre iPhone 4/iPad 1 devices
This means:
- Mipmaps are not so necessary.
- Antialiasing and aniso are cheap enough, not needed on iPad 3 in some cases
And cons:
- If vertex data per frame (number of vertices * storage required after vertex shader) exceeds the internal buffers allocated by the driver, the scene has to be split which costs performance. The driver might allocate a larger buffer after this point, or you might need to reduce your vertex count. This becomes apparent on iPad2 (iOS 4.3) at around 100 thousand vertices with quite complex shaders.
- TBDR needs more transistors allocated for the tiling and deferred parts, conceptually leaving fewer transistors for raw performance. It is very hard (i.e. practically impossible) to get GPU timing for a draw call on TBDR, making profiling hard.
Further reading
- PowerVR SGX Architecture Guide http://imgtec.com/powervr/insider/powervr-sdk-docs.asp
Screen resolution
iOS version
Dynamic Objects
Asset Bundles
- Asset Bundles are cached on a device to a certain limit
- Create using the Editor API
- Load
- Using WWW API: WWW.LoadFromCacheOrDownload
- As a resource: AssetBundle.CreateFromMemory or AssetBundle.CreateFromFile
- Unload
- AssetBundle.Unload
- There is an option to unload the bundle but keep the loaded assets from it
- It can also kill all the loaded assets, even if they're referenced in the scene
- Resources.UnloadUnusedAssets
- Unloads all assets no longer referenced in the scene. So remember to kill references to the assets you don't need.
- Public and static variables are never garbage collected.
- Resources.UnloadAsset
- Unloads a specific asset from memory. It can be reloaded from disk if needed.
- AssetBundle.Unload
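Putting the load and unload calls above together (a sketch using the Unity 3.x era API; the URL, version number and asset name are placeholders):

```csharp
IEnumerator LoadBundle () {
    // cached on the device; re-downloads only when the version number changes
    WWW www = WWW.LoadFromCacheOrDownload ("http://example.com/myBundle.unity3d", 1);
    yield return www;

    AssetBundle bundle = www.assetBundle;
    GameObject prefab = bundle.Load ("MyPrefab", typeof (GameObject)) as GameObject;
    Instantiate (prefab);

    bundle.Unload (false);   // false: free the bundle data but keep the loaded assets
}
```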
Is there any limit to the number of AssetBundles that can be downloaded at the same time on iOS? (e.g. can we safely download more than 10 AssetBundles at the same time, or every frame?)
Downloads are implemented via the async API provided by the OS, so the OS decides how many threads to create for downloads. When launching multiple concurrent downloads, keep in mind the total bandwidth the device can support and the amount of free memory. Each concurrent download allocates its own temporary buffer, so be careful not to run out of memory.
Resources
- Assets need to be recognized by Unity to be placed in a build.
- Add the .bytes file extension to any raw byte files you want Unity to recognize as binary data.
- Add the .txt file extension to any text files you want Unity to recognize as a text asset.
- Resources are converted to a platform format at build time.
- Resources.Load()
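For example, a file saved as Assets/Resources/mydata.txt can be loaded like this (the file names are placeholders):

```csharp
// Text asset from Assets/Resources/mydata.txt
TextAsset text = Resources.Load ("mydata", typeof (TextAsset)) as TextAsset;
Debug.Log (text.text);

// Raw binary from Assets/Resources/blob.bytes
TextAsset blob = Resources.Load ("blob", typeof (TextAsset)) as TextAsset;
byte[] raw = blob.bytes;
```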
Silly issues checklist
- Textures without proper compression
- Different solutions for different cases, but be sure to compress textures unless you're sure you should not.
- ETC/RGBA16 - the default for Android
- but can tweak depending on the GPU vendor
- best approach is to use ETC where possible
- alpha textures can use two ETC files with one channel being for alpha
- PVRTC - default for iOS
- good for most cases
- Textures having Get/Set pixels enabled - doubles the footprint, uncheck unless Get/Set is needed
- Textures loaded from JPEG/PNGs at runtime will be uncompressed
- Big mp3 files marked as decompress on load
- Additive scene loading
- Unused Assets that remain uncleaned in memory
- Static fields
- Asset bundles that were not unloaded
- If it randomly crashes, try it on a devkit or a device with 2 GB memory (like the iPad 3).
Sometimes there's nothing in the console, just a random crash.
- Fast script call and stripping may lead to random crashes on iOS. Try without them.
Advanced
- Vector Cookbook
- AssetBundles (Pro only)
- Graphics Features
- AssetDatabase
- Build Player Pipeline
- Profiler (Pro only)
- Lightmapping Quickstart
- Occlusion Culling (Pro only)
- Camera Tricks
- Loading Resources at Runtime
- Modifying Source Assets Through Scripting
- Generating Mesh Geometry Procedurally
- Rich Text
- Using Mono DLLs in a Unity Project
- Execution Order of Event Functions
- Practical Guide to Optimization for Mobiles
- Practical Guide to Optimization for Mobiles - Future & High End Devices
- Practical Guide to Optimization for Mobiles - Graphics Methods
- Practical Guide to Optimization for Mobiles - Scripting and Gameplay Methods
- Practical Guide to Optimization for Mobiles - Rendering Optimizations
- Practical Guide to Optimization for Mobiles - Optimizing Scripts
- Optimizing Graphics Performance
- Reducing File Size
- Understanding Automatic Memory Management
- Platform Dependent Compilation
- Generic Functions
- Debugging
- Plugins (Pro/Mobile-Only Feature)
- Textual Scene File Format (Pro-only Feature)
- Streaming Assets
- Command line arguments
- Running Editor Script Code on Launch
- Network Emulation
- Security Sandbox of the Webplayer
- Overview of available .NET Class Libraries
- Visual Studio C# Integration
- Using External Version Control Systems with Unity
- Analytics
- Checking For Updates
- Installing Multiple Versions of Unity
- Trouble Shooting
- Shadows in Unity
Vector Cookbook
Vector Cookbook
Although vector operations are easy to describe, they are surprisingly subtle and powerful and have many uses in games programming. The following pages offer some suggestions about using vectors effectively in your code.
- Understanding Vector Arithmetic
- Direction and Distance from One Object to Another
- Computing a Normal/Perpendicular vector
- The Amount of One Vector's Magnitude that Lies in Another Vector's Direction
UnderstandingVectorArithmetic
Vector arithmetic is fundamental to 3D graphics, physics and animation and it is useful to understand it in depth to get the most out of Unity. Below are descriptions of the main operations and some suggestions about the many things they can be used for.
Addition
When two vectors are added together, the result is equivalent to taking the original vectors as "steps", one after the other. Note that the order of the two parameters doesn't matter, since the result is the same either way.

If the first vector is taken as a point in space then the second can be interpreted as an offset or "jump" from that position. For example, to find a point 5 units above a location on the ground, you could use the following calculation:-
var pointInAir = pointOnGround + new Vector3(0, 5, 0);
If the vectors represent forces then it is more intuitive to think of them in terms of their direction and magnitude (the magnitude indicates the size of the force). Adding two force vectors results in a new vector equivalent to the combination of the forces. This concept is often useful when applying forces with several separate components acting at once (eg, a rocket being propelled forward may also be affected by a crosswind).
Subtraction
Vector subtraction is most often used to get the direction and distance from one object to another. Note that the order of the two parameters does matter with subtraction:-

// The vector d has the same magnitude as c but points in the opposite direction.
var c = b - a;
var d = a - b;
As with numbers, adding the negative of a vector is the same as subtracting the positive.
// These both give the same result.
var c = a - b;
var c = a + -b;
The negative of a vector has the same magnitude as the original and points along the same line but in the exact opposite direction.
Scalar Multiplication and Division
When discussing vectors, it is common to refer to an ordinary number (eg, a float value) as a scalar. The meaning of this is that a scalar only has "scale" or magnitude whereas a vector has both magnitude and direction.
Multiplying a vector by a scalar results in a vector that points in the same direction as the original. However, the new vector's magnitude is equal to the original magnitude multiplied by the scalar value.
Likewise, scalar division divides the original vector's magnitude by the scalar.
These operations are useful when the vector represents a movement offset or a force. They allow you to change the magnitude of the vector without affecting its direction.
When any vector is divided by its own magnitude, the result is a vector with a magnitude of 1, which is known as a normalized vector. If a normalized vector is multiplied by a scalar then the magnitude of the result will be equal to that scalar value. This is useful when the direction of a force is constant but the strength is controllable (eg, the force from a car's wheel always pushes forwards but the power is controlled by the driver).
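The car-wheel example can be written directly in code (a sketch; the throttle and maxPower variables are hypothetical):

```csharp
// transform.forward is already normalized, so the magnitude of the
// applied force is exactly throttle * maxPower.
Vector3 direction = transform.forward;
rigidbody.AddForce (direction * throttle * maxPower);
```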
Dot Product
The dot product takes two vectors and returns a scalar. This scalar is equal to the magnitudes of the two vectors multiplied together and the result multiplied by the cosine of the angle between the vectors. When both vectors are normalized, the cosine essentially states how far the first vector extends in the second's direction (or vice-versa - the order of the parameters doesn't matter).

It is easy enough to think in terms of angles and then find the corresponding cosines using a calculator. However, it is useful to get an intuitive understanding of some of the main cosine values as shown in the diagram below:-

The dot product is a very simple operation that can be used in place of the Mathf.Cos function or the vector magnitude operation in some circumstances (it doesn't do exactly the same thing but sometimes the effect is equivalent). However, calculating the dot product function takes much less CPU time and so it can be a valuable optimization.
Cross Product
The other operations are defined for 2D and 3D vectors and indeed vectors with any number of dimensions. The cross product, by contrast, is only meaningful for 3D vectors. It takes two vectors as input and returns another vector as its result.
The result vector is perpendicular to the two input vectors. The "left hand rule" can be used to remember the direction of the output vector from the ordering of the input vectors. If the first parameter is matched up to the thumb of the hand and the second parameter to the forefinger, then the result will point in the direction of the middle finger. If the order of the parameters is reversed then the resulting vector will point in the exact opposite direction but will have the same magnitude.

The magnitude of the result is equal to the magnitudes of the input vectors multiplied together and then that value multiplied by the sine of the angle between them. Some useful values of the sine function are shown below:-

The cross product can seem complicated since it combines several useful pieces of information in its return value. However, like the dot product, it is very efficient mathematically and can be used to optimize code that would otherwise depend on slow transcendental functions.
Page last updated: 2011-08-26
DirectionDistanceFromOneObjectToAnother
If one point in space is subtracted from another then the result is a vector that "points" from one object to the other:
// Gets a vector that points from the player's position to the target's.
var heading = target.position - player.position;
As well as pointing in the direction of the target object, this vector's magnitude is equal to the distance between the two positions. It is common to need a normalized vector giving the direction to the target and also the distance to the target (say for directing a projectile). The distance between the objects is equal to the magnitude of the heading vector and this vector can be normalized by dividing it by its magnitude:-
var distance = heading.magnitude;
var direction = heading / distance; // This is now the normalized direction.
This approach is preferable to using both the magnitude and normalized properties separately, since they are both quite CPU-hungry (they both involve calculating a square root).
If you only need to use the distance for comparison (for a proximity check, say) then you can avoid the magnitude calculation altogether. The sqrMagnitude property gives the square of the magnitude value, and is calculated like the magnitude but without the time-consuming square root operation. Rather than compare the magnitude against a known distance, you can compare the squared magnitude against the squared distance:-
if (heading.sqrMagnitude < maxRange * maxRange) {
// Target is within range.
}
This is much more efficient than using the true magnitude in the comparison.
Sometimes, the overground heading to a target is required. For example, imagine a player standing on the ground who needs to approach a target floating in the air. If you subtract the player's position from the target's then the resulting vector will point upwards towards the target. This is not suitable for orienting the player's transform since he will also point upwards; what is really needed is a vector from the player's position to the position on the ground directly below the target. This is easily obtained by taking the result of the subtraction and setting the Y coordinate to zero:-
var heading = target.position - player.position;
heading.y = 0; // This is the overground heading.
Page last updated: 2011-08-26
ComputingNormalPerpendicularVector
A normal vector (ie, a vector perpendicular to a plane) is required frequently during mesh generation and may also be useful in path following and other situations. Given three points in the plane, say the corner points of a mesh triangle, it is easy to find the normal. Pick any of the three points and then subtract it from each of the two other points separately to give two vectors:-

var a: Vector3;
var b: Vector3;
var c: Vector3;
var side1: Vector3 = b - a;
var side2: Vector3 = c - a;
The cross product of these two vectors will give a third vector which is perpendicular to the surface. The "left hand rule" can be used to decide the order in which the two vectors should be passed to the cross product function. As you look down at the top side of the surface (from which the normal will point outwards) the first vector should sweep around clockwise to the second:-
var perp: Vector3 = Vector3.Cross(side1, side2);
The result will point in exactly the opposite direction if the order of the input vectors is reversed.
For meshes, the normal vector must also be normalized. This can be done with the normalized property, but there is another trick which is occasionally useful: you can normalize the perpendicular vector by dividing it by its magnitude:-
var perpLength = perp.magnitude;
perp /= perpLength;
It turns out that the area of the triangle is equal to perpLength / 2. This is useful if you need to find the surface area of the whole mesh or want to choose triangles randomly with probability based on their relative areas.
Page last updated: 2011-08-26
Amount of a Vector's Magnitude in Another Direction
A car's speedometer typically works by measuring the rotational speed of one of the unpowered wheels. The car may not be moving directly forward (it may be skidding sideways, for example) in which case part of the motion will not be in the direction the speedometer can measure. The magnitude of an object's rigidbody.velocity vector will give the speed in its direction of overall motion but to isolate the speed in the forward direction, you should use the dot product:-
var fwdSpeed = Vector3.Dot(rigidbody.velocity, transform.forward);
Naturally, the direction can be anything you like but the direction vector must always be normalized for this calculation. Not only is the result more correct than the magnitude of the velocity, it also avoids the slow square root operation involved in finding the magnitude.
Page last updated: 2011-08-26
AssetBundles
AssetBundles are files which you can export from Unity to contain assets of your choice. These files use a proprietary compressed format and can be loaded on demand by your application. This allows you to stream in content, such as models, textures, audio clips, or even entire scenes separately from the scene in which they will be used. AssetBundles have been designed to simplify downloading content to your application. AssetBundles can contain any kind of asset type recognized by Unity, as determined by the filename extension. If you want to include files with custom binary data, they should have the extension ".bytes". Unity will import these files as TextAssets.
When working with AssetBundles, here's the typical workflow:
During development, the developer prepares AssetBundles and uploads them to a server.
Building and uploading asset bundles
- Building AssetBundles. Asset bundles are created in the editor from assets in your scene. The Asset Bundle building process is described in more detail in the section for Building AssetBundles
- Uploading AssetBundles to external storage. This step does not include the Unity Editor or any other Unity channels, but we include it for completeness. You can use an FTP client to upload your Asset Bundles to the server of your choice.
At runtime, on the user's machine, the application will load AssetBundles on demand and operate individual assets within each AssetBundle as needed.
Downloading AssetBundles and loading assets from them
- Downloading AssetBundles at runtime from your application. This is done from script within a Unity scene, and Asset Bundles are loaded from the server on demand. More on that in Downloading Asset Bundles.
- Loading objects from AssetBundles. Once the AssetBundle is downloaded, you might want to access its individual Assets from the Bundle. More on that in Loading Resources from AssetBundles
See also:
- Frequently Asked Questions
- Building AssetBundles
- Downloading Asset Bundles
- Loading Asset Bundles
- Keeping track of loaded AssetBundles
- Storing and loading binary data
- Protecting content
- Managing Asset Dependencies
- Including scripts in AssetBundles
Frequently Asked Questions
- What are AssetBundles?
- What are they used for?
- How do I create an AssetBundle?
- How do I use an AssetBundle?
- How do I use AssetBundles in the Editor?
- How do I cache AssetBundles?
- Are AssetBundles cross-platform?
- How are assets in AssetBundles identified?
- Can I reuse my AssetBundles in another game?
- Will an AssetBundle built now be usable with future versions of Unity?
- How can I list the objects in an AssetBundle?
AssetBundles are a collection of assets, packaged for loading at runtime. With AssetBundles, you can dynamically load and unload new content in your application. AssetBundles can be used to implement post-release DLC.
They can be used to reduce the amount of disk space your game uses when first deployed, and to add new content to an already published game.
To create an AssetBundle you need to use the BuildPipeline editor class. All scripts using Editor classes must be placed in a folder named Editor, anywhere in the Assets folder.
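As a sketch, such an editor script might look like the following (the menu item name and the build options shown are illustrative; the core call is BuildPipeline.BuildAssetBundle):

```csharp
// Editor-only sketch: this file must live in a folder named "Editor".
using UnityEngine;
using UnityEditor;

public class ExportAssetBundles
{
    [MenuItem("Assets/Build AssetBundle From Selection")]
    static void ExportResource()
    {
        // Ask where to save the bundle.
        string path = EditorUtility.SaveFilePanel("Save Bundle", "", "New Bundle", "unity3d");
        if (path.Length != 0)
        {
            // Build the bundle from the selected assets, pulling in their dependencies.
            Object[] selection = Selection.GetFiltered(typeof(Object), SelectionMode.DeepAssets);
            BuildPipeline.BuildAssetBundle(Selection.activeObject, selection, path,
                BuildAssetBundleOptions.CollectDependencies
                | BuildAssetBundleOptions.CompleteAssets);
        }
    }
}
```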
There are two main steps involved when working with AssetBundles. The first step is to download the AssetBundle from a server or disk location, which is done with the WWW class. The second step is to load the Assets from the AssetBundle so that they can be used in the application.
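A minimal C# sketch of both steps (the URL, version number and asset name here are placeholders):

```csharp
using System;
using System.Collections;
using UnityEngine;

public class BundleLoadExample : MonoBehaviour
{
    public string bundleURL;   // e.g. "http://myserver/myBundle.unity3d" -- placeholder
    public string assetName;   // name of an asset inside the bundle -- placeholder

    IEnumerator Start()
    {
        // Step 1: download (and cache) the AssetBundle.
        using (WWW www = WWW.LoadFromCacheOrDownload(bundleURL, 1))
        {
            yield return www;
            if (www.error != null)
                throw new Exception("WWW download had an error: " + www.error);

            // Step 2: load an Asset from the bundle and instantiate it.
            AssetBundle bundle = www.assetBundle;
            Instantiate(bundle.Load(assetName));

            // Free the compressed bundle data once the assets are loaded.
            bundle.Unload(false);
        }
    }
}
```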
As creating applications is an iterative process, you will very likely modify your Assets many times, which would require rebuilding the AssetBundles after every change in order to test them. Even though it is possible to load AssetBundles in the Editor, that is not the recommended workflow. Instead, while testing in the Editor you should use the helper function Resources.LoadAssetAtPath to avoid having to build and rebuild AssetBundles. The function lets you load the Asset as if it were being loaded from an AssetBundle, but skips the building process, so your Assets are always up to date.
A helper script, placed in a C# file named AssetBundleLoader.cs, can take care of loading your Assets appropriately depending on whether you are running in the Editor or not.
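A sketch of such a helper, modelled on the Editor/player split shown later in this section (the method name is illustrative):

```csharp
// AssetBundleLoader.cs -- loads directly from the Project in the Editor,
// and from an AssetBundle in a built player.
using System;
using System.Collections;
using UnityEngine;

public class AssetBundleLoader : MonoBehaviour
{
    public Object Obj;

    public IEnumerator LoadAsset<T>(string asset, string url, int version) where T : Object
    {
        Obj = null;
#if UNITY_EDITOR
        // Skip the bundle entirely: load the always-up-to-date Asset from disk.
        Obj = Resources.LoadAssetAtPath("Assets/" + asset, typeof(T));
        yield return null;
#else
        // Wait for the Caching system to be ready.
        while (!Caching.ready)
            yield return null;

        using (WWW www = WWW.LoadFromCacheOrDownload(url, version))
        {
            yield return www;
            if (www.error != null)
                throw new Exception("WWW download: " + www.error);
            AssetBundle bundle = www.assetBundle;
            Obj = bundle.Load(asset, typeof(T));
            bundle.Unload(false);
        }
#endif
    }
}
```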
You can use WWW.LoadFromCacheOrDownload, which automatically takes care of saving your AssetBundles to disk. Be aware that on the Webplayer you are limited to 50 MB in total (shared between all webplayers). You can buy a separate caching license for your game if you require more space.
AssetBundles are compatible between some platforms. Use the following table as a guideline.
Platform compatibility for AssetBundles:

| | Standalone | Webplayer | iOS | Android |
| Editor | Y | Y | Y | Y |
| Standalone | Y | Y | | |
| Webplayer | Y | Y | | |
| iOS | | | Y | |
| Android | | | | Y |
For example, a bundle created while the Webplayer build target was active would be compatible with the editor and with standalone builds. However, it would not be compatible with apps built for the iOS or Android platforms.
When you build AssetBundles, the assets are identified internally by their filename without the extension. For example, a Texture located in your Project folder at "Assets/Textures/myTexture.jpg" is identified and loaded using "myTexture" if you use the default method. You can have more control over this by supplying your own array of ids (strings) for each object when building your AssetBundle with BuildPipeline.BuildAssetBundleExplicitAssetNames.
AssetBundles allow you to share content between different games. The requirement is that any Assets which are referenced by GameObjects in your AssetBundle must either be included in the AssetBundle or exist in the application (loaded in the current scene). To make sure the referenced Assets are included in the AssetBundle when they are built you can pass the BuildAssetBundleOptions.CollectDependencies option.
AssetBundles can contain a structure called a type tree which allows information about asset types to be understood correctly between different versions of Unity. On desktop platforms, the type tree is included by default but can be disabled by passing BuildAssetBundleOptions.DisableWriteTypeTree to the BuildAssetBundle function. Webplayers intrinsically rely on the type tree and so it is always included (ie, the DisableWriteTypeTree option has no effect). Type trees are never included for mobile and console asset bundles, so you will need to rebuild these bundles whenever the serialization format changes. This can happen in new versions of Unity (except for bugfix releases), and also if you add or remove serialized fields in a MonoBehaviour that is included in the asset bundle. When loading an AssetBundle, Unity will give you an error message if the AssetBundle must be rebuilt.
- How can I list the objects in an AssetBundle?
You can use AssetBundle.LoadAll to retrieve an array containing all objects from the AssetBundle. It is not possible to get a list of the identifiers directly. A common workaround is to keep a separate TextAsset to hold the names of the assets in the AssetBundle.
Page last updated: 2012-09-14
Building AssetBundles
There are three class methods you can use to build AssetBundles:
- BuildPipeline.BuildAssetBundle allows you to build AssetBundles of any type of asset.
- BuildPipeline.BuildStreamedSceneAssetBundle is used when you want to include only scenes to be streamed and loaded as the data becomes available.
- BuildPipeline.BuildAssetBundleExplicitAssetNames is the same as BuildPipeline.BuildAssetBundle but has an extra parameter to specify a custom string identifier (name) for each object.
An example of how to build an AssetBundle
Building asset bundles is done through editor scripting. There is a basic example of this in the scripting documentation for BuildPipeline.BuildAssetBundle.
For the sake of this example, copy and paste the script from the link above into a new C# script called ExportAssetBundles. This script should be placed in a folder named Editor, so that it works inside the Unity Editor.

Now in the menu, you should see two new menu options.

- The first option builds the currently selected object into an asset bundle, including all of its dependencies. For example, if you have a prefab that consists of several hierarchical layers, it will recursively add all the child objects and components to the asset bundle.
- The second option does the opposite, including only the single asset you have selected.
For this example, you should create a new prefab. First create a new Cube, which will appear in the Hierarchy View. Then drag the Cube from the Hierarchy View into the Project View, which will create a prefab of that object.
You should then right-click the Cube prefab in the Project window and select the first of the new build options. At this point you will be presented with a window to save the bundled asset. If you create a new folder called "AssetBundles" and save the cube as Cube.unity3d, your Project window will now look something like this.

At this point you can move the AssetBundle Cube.unity3d elsewhere on your local storage, or upload it to a server of your choice.
Building AssetBundles in a production environment
When first using AssetBundles it may seem enough to build them manually, as in the previous example. But as a project grows in size and the number of Assets increases, doing this process by hand is not efficient. A better approach is to write a function that builds all of the AssetBundles for a project. You can, for example, use a text file that maps Asset files to AssetBundle files.
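As a sketch, such a build function might iterate over a mapping of asset paths to bundle files; the mapping, paths and menu item below are illustrative, not part of any Unity API:

```csharp
// Editor-only sketch: build every AssetBundle for the project in one pass.
using UnityEngine;
using UnityEditor;
using System.Collections.Generic;

public class BuildAllAssetBundles
{
    [MenuItem("Assets/Build All AssetBundles")]
    static void BuildAll()
    {
        // Illustrative mapping: output bundle file -> asset paths it should contain.
        var bundles = new Dictionary<string, string[]> {
            { "AssetBundles/characters.unity3d",
              new[] { "Assets/Prefabs/Player.prefab", "Assets/Prefabs/Enemy.prefab" } },
            { "AssetBundles/props.unity3d",
              new[] { "Assets/Prefabs/Crate.prefab" } },
        };

        foreach (var entry in bundles)
        {
            var assets = new List<Object>();
            foreach (string path in entry.Value)
                assets.Add(AssetDatabase.LoadMainAssetAtPath(path));
            BuildPipeline.BuildAssetBundle(null, assets.ToArray(), entry.Key,
                BuildAssetBundleOptions.CollectDependencies
                | BuildAssetBundleOptions.CompleteAssets);
        }
    }
}
```

In practice the mapping would be read from a text file under version control, so that adding an asset to a bundle does not require a code change.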
Page last updated: 2012-09-04
Downloading AssetBundles
This section assumes you already learned how to build asset bundles. If you have not, please see Building AssetBundles
There are two ways to download an AssetBundle
- Non-caching: This is done by creating a new WWW object. The AssetBundles are not cached to Unity's Cache folder on the local storage device.
- Caching: This is done using the WWW.LoadFromCacheOrDownload call. The AssetBundles are cached to Unity's Cache folder on the local storage device. The WebPlayer shared cache allows up to 50 MB of cached AssetBundles. PC/Mac Standalone applications and iOS/Android applications have a limit of 4 GB. WebPlayer applications that make use of a dedicated cache are limited to the number of bytes specified in the caching license agreement. Please refer to the scripting documentation for other platforms.
Here's an example of a non-caching download:
using System;
using UnityEngine;
using System.Collections;

class NonCachingLoadExample : MonoBehaviour {
    public string BundleURL;
    public string AssetName;

    IEnumerator Start() {
        // Download the file from the URL. It will not be saved in the Cache
        using (WWW www = new WWW(BundleURL)) {
            yield return www;
            if (www.error != null)
                throw new Exception("WWW download had an error:" + www.error);
            AssetBundle bundle = www.assetBundle;
            if (AssetName == "")
                Instantiate(bundle.mainAsset);
            else
                Instantiate(bundle.Load(AssetName));
            // Unload the AssetBundle's compressed contents to conserve memory
            bundle.Unload(false);
        }
    }
}
The recommended way to download AssetBundles is to use WWW.LoadFromCacheOrDownload. For example:
using System;
using UnityEngine;
using System.Collections;

public class CachingLoadExample : MonoBehaviour {
    public string BundleURL;
    public string AssetName;
    public int version;

    void Start() {
        StartCoroutine (DownloadAndCache());
    }

    IEnumerator DownloadAndCache (){
        // Wait for the Caching system to be ready
        while (!Caching.ready)
            yield return null;

        // Load the AssetBundle file from Cache if it exists with the same version, or download and store it in the cache
        using(WWW www = WWW.LoadFromCacheOrDownload (BundleURL, version)){
            yield return www;
            if (www.error != null)
                throw new Exception("WWW download had an error:" + www.error);
            AssetBundle bundle = www.assetBundle;
            if (AssetName == "")
                Instantiate(bundle.mainAsset);
            else
                Instantiate(bundle.Load(AssetName));
            // Unload the AssetBundle's compressed contents to conserve memory
            bundle.Unload(false);
        }
    }
}
When you access the .assetBundle property, the downloaded data is extracted and the AssetBundle object is created. At this point, you are ready to load the objects contained in the bundle. The second parameter passed to LoadFromCacheOrDownload specifies which version of the AssetBundle to download. If the AssetBundle doesn't exist in the cache or has a version lower than requested, LoadFromCacheOrDownload will download the AssetBundle. Otherwise the AssetBundle will be loaded from cache.
Putting it all together
Now that the components are in place you can build a scene that will allow you to load your AssetBundle and display the contents on screen.
Final project structure
First create an empty game object. Drag the CachingLoadExample script onto the empty game object you just created. Then type the URL of your AssetBundle in the BundleURL field. As we have placed the bundle in the project directory, you can copy its location on disk and add the prefix file://, for example file://C:/UnityProjects/AssetBundlesGuide/Assets/AssetBundles/Cube.unity3d
You can now hit play in the Editor and you should see the Cube prefab being loaded from the AssetBundle.
Loading AssetBundles in the Editor
When working in the Editor, requiring AssetBundles to be built and loaded can slow down the development process. For instance, if an Asset from an AssetBundle is modified, the AssetBundle has to be rebuilt; and in a production environment, where all AssetBundles are most likely built together, updating a single AssetBundle becomes a lengthy operation. A better approach is to have a separate code path in the Editor that loads the Asset directly instead of loading it from an AssetBundle. To do this it is possible to use Resources.LoadAssetAtPath (Editor only).
// C# Example
// Loading an Asset from disk instead of loading from an AssetBundle
// when running in the Editor
using System;
using System.Collections;
using UnityEngine;

class LoadAssetFromAssetBundle : MonoBehaviour
{
    public Object Obj;

    public IEnumerator DownloadAssetBundle<T>(string asset, string url, int version) where T : Object {
        Obj = null;

#if UNITY_EDITOR
        Obj = Resources.LoadAssetAtPath("Assets/" + asset, typeof(T));
        yield return null;
#else
        // Wait for the Caching system to be ready
        while (!Caching.ready)
            yield return null;

        // Start the download
        using(WWW www = WWW.LoadFromCacheOrDownload (url, version)){
            yield return www;
            if (www.error != null)
                throw new Exception("WWW download:" + www.error);
            AssetBundle assetBundle = www.assetBundle;
            Obj = assetBundle.Load(asset, typeof(T));
            // Unload the AssetBundle's compressed contents to conserve memory
            assetBundle.Unload(false);
        }
#endif
    }
}
Page last updated: 2012-08-16
Loading resources from AssetBundles
Loading and unloading objects from an AssetBundle
Having created an AssetBundle object from the downloaded data, you can load the objects contained in it using three different methods:
- AssetBundle.Load will load an object using its name identifier as a parameter. The name is the one visible in the Project view. You can optionally pass an object type as an argument to the Load method to make sure the object loaded is of a specific type.
- AssetBundle.LoadAsync works the same as the Load method described above but it will not block the main thread while the asset is loaded. This is useful when loading large assets or many assets at once to avoid pauses in your application.
- AssetBundle.LoadAll will load all the objects contained in your AssetBundle. As with AssetBundle.Load, you can optionally filter objects by their type.
To unload assets you need to use AssetBundle.Unload. This method takes a boolean parameter which tells Unity whether to unload all data (including the loaded asset objects) or only the compressed data from the downloaded bundle. If your application is using some objects from the AssetBundle and you want to free some memory you can pass false to unload the compressed data from memory. If you want to completely unload everything from the AssetBundle you should pass true which will destroy the Assets loaded from the AssetBundle.
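For example, given a previously loaded AssetBundle held in a variable `bundle`, the two modes look like this:

```csharp
// Either: free only the compressed bundle data; objects already
// loaded from the bundle stay alive and usable.
bundle.Unload(false);

// Or: destroy everything, including the Assets that were
// loaded from this bundle.
bundle.Unload(true);
```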
Loading objects from an AssetBundle asynchronously
You can use the AssetBundle.LoadAsync method to load objects Asynchronously and reduce the likelihood of having hiccups in your application.
// Note: This example does not check for errors. Please look at the example in the DownloadingAssetBundles section for more information
// (The class wrapper and url field have been added so that the snippet compiles as-is.)
using System.Collections;
using UnityEngine;

public class AsyncLoadExample : MonoBehaviour {
    public string url;

    IEnumerator Start () {
        // Start a download of the given URL
        WWW www = WWW.LoadFromCacheOrDownload (url, 1);

        // Wait for download to complete
        yield return www;

        // Load and retrieve the AssetBundle
        AssetBundle bundle = www.assetBundle;

        // Load the object asynchronously
        AssetBundleRequest request = bundle.LoadAsync ("myObject", typeof(GameObject));

        // Wait for completion
        yield return request;

        // Get the reference to the loaded object
        GameObject obj = request.asset as GameObject;

        // Unload the AssetBundle's compressed contents to conserve memory
        bundle.Unload(false);
    }
}
Page last updated: 2012-08-14
Keeping track of loaded AssetBundles
Unity will only allow you to have a single instance of a particular AssetBundle loaded at one time in your application. This means that you can't retrieve an AssetBundle from a WWW object if the same one has been loaded previously and has not been unloaded. In practical terms, it means that when you try to access a previously loaded AssetBundle like this:
AssetBundle bundle = www.assetBundle;
the following error will be thrown
Cannot load cached AssetBundle. A file of the same name is already loaded from another AssetBundle
and the assetBundle property will return null. Since you can't retrieve the AssetBundle during the second download if the first one is still loaded, you need to either unload the AssetBundle when you are no longer using it, or maintain a reference to it and avoid downloading it if it is already in memory. You can decide the right course of action based on your needs, but our recommendation is that you unload the AssetBundle as soon as you are done loading objects. This will free the memory and you will no longer get an error about loading cached AssetBundles.
If you do want to keep track of which AssetBundles you have downloaded, you could use a wrapper class to help you manage your downloads like the following:
using UnityEngine;
using System;
using System.Collections;
using System.Collections.Generic;

static public class AssetBundleManager {
    // A dictionary to hold the AssetBundle references
    static private Dictionary<string, AssetBundleRef> dictAssetBundleRefs;

    static AssetBundleManager (){
        dictAssetBundleRefs = new Dictionary<string, AssetBundleRef>();
    }

    // Class with the AssetBundle reference, url and version
    private class AssetBundleRef {
        public AssetBundle assetBundle = null;
        public int version;
        public string url;

        public AssetBundleRef(string strUrlIn, int intVersionIn) {
            url = strUrlIn;
            version = intVersionIn;
        }
    };

    // Get an AssetBundle
    public static AssetBundle getAssetBundle (string url, int version){
        string keyName = url + version.ToString();
        AssetBundleRef abRef;
        if (dictAssetBundleRefs.TryGetValue(keyName, out abRef))
            return abRef.assetBundle;
        else
            return null;
    }

    // Download an AssetBundle
    public static IEnumerator downloadAssetBundle (string url, int version){
        string keyName = url + version.ToString();
        if (dictAssetBundleRefs.ContainsKey(keyName))
            yield return null;
        else {
            using(WWW www = WWW.LoadFromCacheOrDownload (url, version)){
                yield return www;
                if (www.error != null)
                    throw new Exception("WWW download:" + www.error);
                AssetBundleRef abRef = new AssetBundleRef (url, version);
                abRef.assetBundle = www.assetBundle;
                dictAssetBundleRefs.Add (keyName, abRef);
            }
        }
    }

    // Unload an AssetBundle
    public static void Unload (string url, int version, bool allObjects){
        string keyName = url + version.ToString();
        AssetBundleRef abRef;
        if (dictAssetBundleRefs.TryGetValue(keyName, out abRef)){
            abRef.assetBundle.Unload (allObjects);
            abRef.assetBundle = null;
            dictAssetBundleRefs.Remove(keyName);
        }
    }
}
An example usage of the class would be:
using UnityEngine;
using System.Collections;

class ManagedAssetBundleExample : MonoBehaviour {
    public string url;
    public int version;
    AssetBundle bundle;

    void OnGUI (){
        if (GUILayout.Button ("Download bundle")){
            bundle = AssetBundleManager.getAssetBundle (url, version);
            if(!bundle)
                StartCoroutine (DownloadAB());
        }
    }

    IEnumerator DownloadAB (){
        yield return StartCoroutine(AssetBundleManager.downloadAssetBundle (url, version));
        bundle = AssetBundleManager.getAssetBundle (url, version);
    }

    void OnDisable (){
        AssetBundleManager.Unload (url, version, false);
    }
}
Please bear in mind that the AssetBundleManager class in this example is static, and any AssetBundles that you are referencing will not be destroyed when loading a new scene. Use this class as a guide, but as recommended initially it is best if you unload AssetBundles right after they have been used. You can always clone a previously instantiated object, removing the need to load the AssetBundle again.
Page last updated: 2012-05-12
Storing and loading binary data
The first step is to save your binary data file with the ".bytes" extension. Unity will treat this file as a TextAsset. As a TextAsset the file can be included when you build your AssetBundle. Once you have downloaded the AssetBundle in your application and loaded the TextAsset object, you can use the .bytes property of the TextAsset to retrieve your binary data.
string url = "http://www.mywebsite.com/mygame/assetbundles/assetbundle1.unity3d";

IEnumerator Start () {
    // Start a download of the given URL
    WWW www = WWW.LoadFromCacheOrDownload (url, 1);

    // Wait for download to complete
    yield return www;

    // Load and retrieve the AssetBundle
    AssetBundle bundle = www.assetBundle;

    // Load the TextAsset object
    TextAsset txt = bundle.Load("myBinaryAsText", typeof(TextAsset)) as TextAsset;

    // Retrieve the binary data as an array of bytes
    byte[] bytes = txt.bytes;
}
Page last updated: 2012-05-12
Protecting content
Whilst it is possible to use encryption to secure your Assets as they are being transmitted, once the data is in the hands of the client it is possible to find ways to grab the content from them. For instance, there are tools out there which can record 3D data at the driver level, allowing users to extract models and textures as they are sent to the GPU. For this reason, our general stance is that if users are determined to extract your assets, they will be able to.
However, it is possible for you to use your own data encryption on AssetBundle files if you still want to.
One way to do this is to make use of the TextAsset type to store your data as bytes. You can encrypt your data files and save them with a .bytes extension, which Unity will treat as a TextAsset type. Once imported into the Editor, the TextAsset files can be included in your AssetBundle, ready to be placed on a server. On the client side the AssetBundle would be downloaded and the content decrypted from the bytes stored in the TextAsset. With this method the AssetBundles themselves are not encrypted, but the data stored as TextAssets is.
string url = "http://www.mywebsite.com/mygame/assetbundles/assetbundle1.unity3d";

IEnumerator Start () {
    // Start a download of the encrypted assetbundle
    WWW www = WWW.LoadFromCacheOrDownload (url, 1);

    // Wait for download to complete
    yield return www;

    // Load the TextAsset from the AssetBundle
    TextAsset textAsset = www.assetBundle.Load("EncryptedData", typeof(TextAsset)) as TextAsset;

    // Get the byte data
    byte[] encryptedData = textAsset.bytes;

    // Decrypt the AssetBundle data
    byte[] decryptedData = YourDecryptionMethod(encryptedData);

    // Use your byte array. The AssetBundle will be cached
}
An alternative approach is to fully encrypt the AssetBundles from source and then download them using the WWW class. You can give them whatever file extension you like as long as your server serves them up as binary data. Once downloaded you would then use your decryption routine on the data from the .bytes property of your WWW instance to get the decrypted AssetBundle file data and create the AssetBundle from memory using AssetBundle.CreateFromMemory.
string url = "http://www.mywebsite.com/mygame/assetbundles/assetbundle1.unity3d";

IEnumerator Start () {
    // Start a download of the encrypted assetbundle
    WWW www = new WWW (url);

    // Wait for download to complete
    yield return www;

    // Get the byte data
    byte[] encryptedData = www.bytes;

    // Decrypt the AssetBundle data
    byte[] decryptedData = YourDecryptionMethod(encryptedData);

    // Create an AssetBundle from the bytes array
    AssetBundle bundle = AssetBundle.CreateFromMemory(decryptedData);

    // You can now use your AssetBundle. The AssetBundle is not cached.
}
The advantage of this latter approach over the first one is that you can use any method (except WWW.LoadFromCacheOrDownload) to transmit your bytes, for example sockets in a plugin, and the data is fully encrypted. The drawback is that the bundle won't be cached using Unity's automatic caching. In all players except the WebPlayer you can store the file manually on disk and load it using AssetBundle.CreateFromFile.
A third approach combines the best of both: store an AssetBundle itself as a TextAsset inside another, normal AssetBundle. The unencrypted AssetBundle containing the encrypted one would be cached. The encrypted AssetBundle could then be loaded into memory, decrypted and instantiated using AssetBundle.CreateFromMemory.
string url = "http://www.mywebsite.com/mygame/assetbundles/assetbundle1.unity3d";

IEnumerator Start () {
    // Start a download of the assetbundle that wraps the encrypted one
    WWW www = WWW.LoadFromCacheOrDownload (url, 1);

    // Wait for download to complete
    yield return www;

    // Load the TextAsset from the AssetBundle
    TextAsset textAsset = www.assetBundle.Load("EncryptedData", typeof(TextAsset)) as TextAsset;

    // Get the byte data
    byte[] encryptedData = textAsset.bytes;

    // Decrypt the AssetBundle data
    byte[] decryptedData = YourDecryptionMethod(encryptedData);

    // Create an AssetBundle from the bytes array
    AssetBundle bundle = AssetBundle.CreateFromMemory(decryptedData);

    // You can now use your AssetBundle. The wrapper AssetBundle is cached
}
Page last updated: 2012-09-04
Managing Asset Dependencies
Any given asset in a bundle may depend on other assets. For example, a model may incorporate materials which in turn make use of textures and shaders. It is possible to include all an asset's dependencies along with it in its bundle. However, several assets from different bundles may all depend on a common set of other assets (eg, several different models of buildings may use the same brick texture). If a separate copy of a shared dependency is included in each bundle that has objects using it, then redundant instances of the assets will be created when the bundles are loaded. This will result in wasted memory.
To avoid such wastage, it is possible to separate shared dependencies out into a separate bundle and simply reference them from any bundles with assets that need them. First, the referencing feature needs to be enabled with a call to BuildPipeline.PushAssetDependencies. Then, the bundle containing the referenced dependencies needs to be built. Next, another call to PushAssetDependencies should be made before building the bundles that reference the assets from the first bundle. Additional levels of dependency can be introduced using further calls to PushAssetDependencies. The levels of reference are stored on a stack, so it is possible to go back a level using the corresponding BuildPipeline.PopAssetDependencies function. The push and pop calls need to be balanced including the initial push that happens before building.
At runtime, you need to load a bundle containing dependencies before any other bundle that references them. For example, you would need to load a bundle of shared textures before loading a separate bundle of materials that reference those textures.
Note that if you anticipate needing to rebuild asset bundles that are part of a dependency chain then you should build them with the BuildAssetBundleOptions.DeterministicAssetBundle option enabled. This guarantees that the internal ID values used to identify assets will be the same each time the bundle is rebuilt.
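The build-time protocol described above can be sketched as follows (the asset paths, bundle file names and menu item are illustrative):

```csharp
// Editor-only sketch: balanced push/pop around a two-level dependency chain.
using UnityEngine;
using UnityEditor;

public class BuildDependentBundles
{
    [MenuItem("Assets/Build Dependent Bundles")]
    static void Build()
    {
        // Illustrative assets: shared textures, and materials that reference them.
        Object[] textures = { AssetDatabase.LoadMainAssetAtPath("Assets/Textures/brick.jpg") };
        Object[] materials = { AssetDatabase.LoadMainAssetAtPath("Assets/Materials/wall.mat") };

        BuildAssetBundleOptions options =
            BuildAssetBundleOptions.CollectDependencies
            | BuildAssetBundleOptions.CompleteAssets
            | BuildAssetBundleOptions.DeterministicAssetBundle;

        BuildPipeline.PushAssetDependencies();
        // Level 1: the shared dependencies.
        BuildPipeline.BuildAssetBundle(null, textures, "AssetBundles/textures.unity3d", options);

        BuildPipeline.PushAssetDependencies();
        // Level 2: bundles that reference assets from level 1.
        BuildPipeline.BuildAssetBundle(null, materials, "AssetBundles/materials.unity3d", options);
        BuildPipeline.PopAssetDependencies();

        BuildPipeline.PopAssetDependencies();
    }
}
```

At runtime, textures.unity3d would then have to be loaded before materials.unity3d.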
Page last updated: 2012-05-12
Including scripts in AssetBundles
AssetBundles can contain scripts as TextAssets but as such they will not be actual executable code. If you want to include code in your AssetBundles that can be executed in your application, it needs to be pre-compiled into an assembly and loaded using the Mono Reflection class (note: Reflection is not available on iOS). You can create your assemblies in any normal C# IDE (e.g. MonoDevelop, Visual Studio) or any text editor using the Mono/.NET compilers.
string url = "http://www.mywebsite.com/mygame/assetbundles/assetbundle1.unity3d";

IEnumerator Start () {
    // Start a download of the given URL
    WWW www = WWW.LoadFromCacheOrDownload (url, 1);

    // Wait for download to complete
    yield return www;

    // Load and retrieve the AssetBundle
    AssetBundle bundle = www.assetBundle;

    // Load the TextAsset object
    TextAsset txt = bundle.Load("myBinaryAsText", typeof(TextAsset)) as TextAsset;

    // Load the assembly and get a type (class) from it
    var assembly = System.Reflection.Assembly.Load(txt.bytes);
    var type = assembly.GetType("MyClassDerivedFromMonoBehaviour");

    // Instantiate a GameObject and add a component with the loaded class
    GameObject go = new GameObject();
    go.AddComponent(type);
}
Page last updated: 2012-05-12
Graphics Features
- HDR (High Dynamic Range) Rendering in Unity
- Rendering Paths
- Linear Lighting (Pro Only)
- Level Of Detail (Pro Only)
- Shaders
- Using DirectX 11 in Unity 4
- Compute Shaders
- Graphics Emulation
HDR
In standard rendering, the red, green and blue values for a pixel are each represented by a fraction in the range 0..1, where 0 represents zero intensity and 1 represents the maximum intensity for the display device. While this is straightforward to use, it doesn't accurately reflect the way that lighting works in a real life scene. The human eye tends to adjust to local lighting conditions, so an object that looks white in a dimly lit room may in fact be less bright than an object that looks grey in full daylight. Additionally, the eye is more sensitive to brightness differences at the low end of the range than at the high end.
More convincing visual effects can be achieved if the rendering is adapted to let the ranges of pixel values more accurately reflect the light levels that would be present in a real scene. Although these values will ultimately need to be mapped back to the range available on the display device, any intermediate calculations (such as Unity's image effects) will give more authentic results. Allowing the internal representation of the graphics to use values outside the 0..1 range is the essence of High Dynamic Range (HDR) rendering.
Working with HDR
HDR is enabled separately for each camera using a setting on the Camera component:

When HDR is active, the scene is rendered into an HDR image buffer which can accommodate pixel values outside the 0..1 range. This buffer is then postprocessed using image effects such as HDR bloom. The tonemapping image effect is what converts the HDR image into the standard low dynamic range (LDR) image to be sent for display. The conversion to LDR must be applied at some point in the image effect pipeline but it need not be the final step if LDR-only image effects are to be applied afterwards. For convenience, some image effects can automatically convert to LDR after applying an HDR effect (see Scripting below).
Tonemapping
Tonemapping is the process of mapping HDR values back into the LDR range. There are many different techniques, and what is good for one project may not be the best for another. A variety of tonemapping image effects are included in Unity. To use them, select Assets -> Import Package -> Image Effects (Pro Only), select the camera in the scene, then select Component -> Image Effects -> Tone Mapping. A detailed description of the tonemapping types can be found in the image effects documentation.

An exceptionally bright scene rendered in HDR. Without tonemapping, most pixels seem out of range.

The same scene as above, but this time the tonemapping effect is bringing most intensities into a more plausible range. Note that adaptive tonemapping can even blend between the image above and this one, thus simulating the adaptive nature of capturing media (e.g. eyes, cameras).
HDR Bloom and Glow
Using HDR allows for much more control in post processing. LDR bloom has the unfortunate side effect of blurring many areas of a scene even if their pixel intensity is less than 1.0. By using HDR it is possible to bloom only those areas where the intensity is greater than one. This leads to a much more desirable outcome, with only the super-bright elements of a scene bleeding into neighboring pixels. The built-in 'Bloom and Lens Flares' image effect now also supports HDR. To attach it to a camera, select Assets -> Import Package -> Image Effects (Pro Only), select the camera in the scene, then select Component -> Image Effects -> Bloom. A detailed description of the 'Bloom' effect can be found in the image effects documentation.

The sun reflections in the car window in this scene have intensity values far greater than 1.0. Bloom can only pick up and glow these parts if the camera is HDR enabled and thus captures these intensities.

The car window will remain without glow if the camera is not HDR enabled. The only way to add glow then is to lower the intensity threshold, but then unwanted parts of the image will start glowing as well.
Advantages of HDR
- Colors not being lost in high intensity areas
- Better bloom and glow support
- Reduction of banding in low frequency lighting areas
Disadvantages of HDR
- Uses Floating Point buffers (rendering is slower and requires more VRAM)
- No hardware anti-aliasing support (but you can use Anti-Aliasing image effect to smooth out the edges)
- Not supported on all hardware
Usage notes
Forward Rendering
In forward rendering mode HDR is only supported if you have an image effect present. This is due to performance considerations. If you have no image effect present then no tone mapping will exist and intensity truncation will occur. In this situation the scene will be rendered directly to the backbuffer where HDR is not supported.
Deferred Rendering
In HDR mode the light prepass buffer is also allocated as a floating point buffer. This reduces banding in the lighting buffer. HDR is supported in deferred rendering even if no image effects are present.
Scripting
The ImageEffectTransformsToLDR attribute can be added to an image effect script to indicate that the target buffer should be in LDR instead of HDR. Essentially, this means that a script can automatically convert to LDR after applying its HDR image effect.
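As a sketch (the class name, material field and shader behind it are hypothetical; only the attribute itself is the API being illustrated), a tonemapping-style effect script might apply the attribute to its OnRenderImage method:

```csharp
using UnityEngine;

[ExecuteInEditMode]
[RequireComponent (typeof(Camera))]
public class ExampleTonemapper : MonoBehaviour {
    // Assumed to hold a shader that maps HDR values into the 0..1 range.
    public Material tonemapMaterial;

    // The attribute indicates that the destination buffer of this
    // effect should be LDR, so LDR-only effects can follow it.
    [ImageEffectTransformsToLDR]
    void OnRenderImage (RenderTexture source, RenderTexture destination) {
        Graphics.Blit (source, destination, tonemapMaterial);
    }
}
```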
Page last updated: 2012-09-05
Rendering Paths
Unity supports different Rendering Paths. Which one you should choose depends on your game content and the target platform/hardware. Different rendering paths have different features and performance characteristics, mostly affecting lights and shadows.
The rendering path to use is selected in Player Settings. Additionally, it can be overridden for each Camera.
If the graphics card can't handle the selected rendering path, Unity will automatically use a lower-fidelity one. So, for example, on a GPU that can't handle Deferred Lighting, Forward Rendering will be used. If Forward Rendering is not supported, Vertex Lit will be used.
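The per-camera override can also be set from a script. A minimal sketch (the choice of Forward here is purely illustrative):

```csharp
using UnityEngine;

public class ForceForwardRendering : MonoBehaviour {
    void Start () {
        // Override the project-wide rendering path for this camera only.
        // If the hardware can't handle it, Unity still falls back
        // automatically as described above.
        camera.renderingPath = RenderingPath.Forward;
    }
}
```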
Deferred Lighting
Deferred Lighting is the rendering path with the most lighting and shadow fidelity. It is the best choice if you have many realtime lights. It requires a certain level of hardware support, is available in Unity Pro only, and is not supported on Mobile Devices.
For more details see the Deferred Lighting page.
Forward Rendering
Forward Rendering is a shader-based rendering path. It supports per-pixel lighting (including normal maps and light cookies) and realtime shadows from one directional light. In the default settings, only a small number of lights are rendered in per-pixel lighting mode; the rest are calculated at vertices.
For more details see the Forward Rendering page.
Vertex Lit
Vertex Lit is the rendering path with the lowest lighting fidelity and no support for realtime shadows. It is best used on older machines or limited mobile platforms.
For more details see the Vertex Lit page.
Rendering Path Comparison
| | Deferred Lighting | Forward Rendering | Vertex Lit |
| Features | | | |
| Per-pixel lighting (normal maps, light cookies) | Yes | Yes | - |
| Realtime shadows | Yes | 1 Directional Light | - |
| Dual Lightmaps | Yes | - | - |
| Depth & Normals Buffers | Yes | Additional render passes | - |
| Soft Particles | Yes | - | - |
| Semitransparent objects | - | Yes | Yes |
| Anti-Aliasing | - | Yes | Yes |
| Light culling masks | Limited | Yes | Yes |
| Lighting fidelity | All per-pixel | Some per-pixel | All per-vertex |
| Performance | | | |
| Cost of a per-pixel light | Number of pixels it illuminates | Number of pixels * Number of objects it illuminates | - |
| Platform support | | | |
| PC (Windows/Mac) | Shader Model 3.0+ | Shader Model 2.0+ | Anything |
| Mobile (iOS/Android) | - | OpenGL ES 2.0 | OpenGL ES 2.0 & 1.1 |
| Consoles | 360, PS3 | 360, PS3 | - |
Linear Lighting
Overview
Linear lighting refers to the process of illuminating a scene with all inputs being linear. Normally, textures exist with gamma pre-applied to them, which means that the textures are non-linear when they are sampled in a material. If these textures are used in the standard lighting equations, the results will be incorrect, as those equations expect all input to be linearized before use.
Linear lighting ensures that both the inputs and the outputs of a shader are in the correct color space, which results in more correct lighting.
Existing (Gamma) Pipeline
In the existing rendering pipeline, all colors and textures are sampled in gamma space; that is, gamma correction is not removed from images or colors before they are used in a shader. This gives rise to a situation where the inputs to the shader are in gamma space, the lighting equation uses these inputs as if they were in linear space, and finally no gamma correction is applied to the final pixel. Much of the time this looks acceptable, as the two wrongs go some way towards cancelling each other out. But it is not correct.
Linear Lighting Pipeline
If linear lighting is enabled, inputs to the shader program are supplied with the gamma correction removed from them. For colors, this conversion is applied implicitly if you are in linear space. Textures are sampled using hardware sRGB reads: the source texture is supplied in gamma space, and the result is converted automatically on sampling in the graphics hardware. These inputs are then supplied to the shader, and lighting occurs as it normally would. The resultant value is then either gamma corrected and written to the framebuffer, or left in linear space for later gamma correction; this depends on the current rendering configuration.
Differences Between Linear and Gamma Lighting
When using linear lighting, input values to the lighting equations are different than in gamma space. This means that light striking surfaces will have a different response curve than in the existing Unity rendering pipeline.
Light Falloff
The falloff from distance- and normal-based lighting is changed in two ways. Firstly, when rendering in linear mode, the additional gamma correction that is performed will make light radii appear larger. Secondly, lighting edges will also be harsher. This more correctly models lighting intensity falloff on surfaces.

Linear Intensity Response
When you are using gamma space lighting, the colors and textures that are supplied to a shader have gamma correction applied to them. When they are used in a shader, colors of high luminance are actually brighter than they should be for linear lighting. This means that as light intensity increases, the surface will get brighter in a non-linear way. This leads to lighting that can be too bright in many places, and can also give models and scenes a washed-out feel. When you are using linear lighting, the response from the surface remains linear as light intensity increases. This leads to much more realistic surface shading and a much nicer color response from the surface.

Infinite, 3D Head Scan by Lee Perry-Smith is licensed under a Creative Commons Attribution 3.0 Unported License. Available from: http://www.ir-ltd.net/infinite-3d-head-scan-released
Linear and Gamma Blending
When blending into the framebuffer, the blending occurs in the color space of the framebuffer. When using gamma rendering, this means that non-linear colors get blended together, which is incorrect. When using linear space rendering, blending occurs in linear space, which is correct and leads to the expected results.

Using Linear Lighting
Linear lighting results in a different look to the rendered scene. If you author a project for linear lighting, it will most likely not look correct if you change to gamma lighting. Because of this, if you move to linear lighting from gamma lighting, it may take some time to update the project so that it looks as good as before the switch. That being said, enabling linear lighting in Unity is quite simple. The feature is implemented on a per-project level and is exposed in the Player Settings, which can be found at Edit -> Project Settings -> Player -> Other Settings.

Lightmapping
When you are using linear lighting, all lighting and textures are linearized, which means that the values passed to the lightmapper also need to be modified. Because of this, when you switch between linear lighting and gamma lighting (or back), you will need to rebake lightmaps. The Unity lightmapping interface will warn you when the lightmaps are in the incorrect color space.
Supported Platforms
Linear rendering is not supported on all platforms. The build targets that currently support the feature are:
- Windows & Mac (editor, standalone, web player)
- Xbox 360
- PlayStation 3
Even though these targets support linear lighting, it is not guaranteed that the graphics hardware on the device will be able to render the scene properly. You can check the desired color space and the actively supported color space by looking at QualitySettings.desiredColorSpace and QualitySettings.activeColorSpace. If the desired color space is linear but the active color space is gamma, then the player has fallen back to gamma space. This can be used to show a warning box telling the user that the application will not look correct for them, or to force an exit from the player.
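For example, a startup check along these lines could detect the fallback (a minimal sketch; how you warn the user or exit is up to you):

```csharp
using UnityEngine;

public class ColorSpaceCheck : MonoBehaviour {
    void Start () {
        // desiredColorSpace is what the project asked for;
        // activeColorSpace is what the hardware actually provides.
        if (QualitySettings.desiredColorSpace == ColorSpace.Linear &&
            QualitySettings.activeColorSpace == ColorSpace.Gamma) {
            Debug.LogWarning ("Linear color space is not supported " +
                "on this hardware; the application may not look correct.");
        }
    }
}
```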
Linear and Non HDR
When not using HDR, a special framebuffer type is used that supports sRGB read and sRGB write (degamma on read, gamma on write). This means that, just like a texture, the values in the framebuffer are gamma corrected. When this framebuffer is used for blending or bound as a texture, the values have the gamma removed before being used. When these buffers are written to, the value being written is converted from linear space to gamma space. If you are rendering in linear mode, all post-process effects will have their source and target buffers created with sRGB read and write enabled, so that post processing and post-process blending occur in linear space.
Linear and HDR
When using HDR, rendering is performed into floating point buffers. These buffers have enough resolution not to require conversion to and from gamma space whenever the buffer is accessed, which means that when rendering in linear mode, the render targets you use will store the colors in linear space. All blending and post-process effects will therefore implicitly be performed in linear space. When the backbuffer is written to, gamma correction is applied.
GUI and Linear Authored Textures
When rendering the Unity GUI, we do not perform the rendering in linear space. This means that GUI textures should not have their gamma removed on read. This can be achieved in two ways:
- Set the texture type to GUI in the texture importer
- Check the 'Bypass sRGB Sampling' checkbox in the advanced texture importer
It is also important that lookup textures, and other textures whose RGB values are authored to mean something specific, bypass sRGB sampling.
This forces the sampled texture not to have gamma removed before being used by the graphics hardware. This is also useful for other texture types, such as masks, where you want the value that is passed to the shader to be exactly the same value that is in the authored texture.
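For projects with many such textures, the importer flag can also be set from an editor script. A sketch, assuming the convention that lookup textures live in a folder named "LUTs" (the folder and class names are illustrative); in the Unity 4 scripting API the 'Bypass sRGB Sampling' checkbox corresponds to TextureImporter.linearTexture:

```csharp
using UnityEditor;

public class LinearTexturePostprocessor : AssetPostprocessor {
    void OnPreprocessTexture () {
        // Assumed convention: textures under a LUTs folder are lookup
        // textures whose values must reach the shader unmodified.
        if (assetPath.Contains ("/LUTs/")) {
            TextureImporter importer = (TextureImporter)assetImporter;
            importer.linearTexture = true; // bypass sRGB sampling on read
        }
    }
}
```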
Page last updated: 2012-01-18
Level Of Detail
As your scenes get larger, performance becomes a bigger consideration. One of the ways to manage this is to have meshes with different levels of detail depending on how far the camera is from the object. This is called Level of Detail (abbreviated as LOD).
Here's one of the ways to set up an object with different LODs:
- Create an empty Game Object in the scene.
- Create two versions of the mesh: a high-resolution mesh (for LOD 0, when the camera is closest) and a low-resolution mesh (for LOD 1, when the camera is further away).
- Add a LODGroup component to this object (select it from the menu).
- Drag the high-resolution mesh onto the first Renderers box for LOD: 0. Choose yes in the "Reparent game objects?" dialog.
- Drag the low-resolution mesh onto the Renderers box for LOD: 1. Choose yes in the "Reparent game objects?" dialog.
- Right-click on LOD: 2 and delete it.
At this point the empty object contains both versions of the mesh and "knows" which mesh to show depending on how far away the camera is.
You can preview the effect by dragging the camera icon left and right in the LODGroup component window.
Camera at LOD 0
Camera at LOD 1
In the Scene View you can see:
- The percentage of the view the object is occupying
- Which LOD is currently being displayed
- The number of triangles
LOD-based naming conventions in the asset import pipeline
In order to simplify the setup of LODs, Unity has a naming convention for models that are being imported.
Simply create your meshes in your modelling tool with names ending with _LOD0, _LOD1, _LOD2, etc., and the LOD group with the appropriate settings will be created for you.
Note that the convention assumes that LOD 0 is the highest-resolution model.
Setting up LODs for different platforms
You can fine-tune your LOD settings for each platform in Quality Settings.
Useful options
There are some options that are useful when working with LODs:
| Recalculate Bounds | Click this to update the bounding volume when an object newly added to the LODGroup is not yet reflected in it. One case where this is needed is when an object is part of a prefab and a new object is added to that prefab; objects added directly to the LODGroup have their bounds updated automatically. |
| Update Lightmaps | Updates the Scale in Lightmap property of the lightmaps based on the LOD boundaries. |
| Upload to Importer | Uploads the LOD boundaries to the importer. |
Shaders
All rendering in Unity is done with Shaders - small scripts that let you configure how the graphics hardware is set up for rendering. Unity ships with 60+ built-in shaders (documented in the Built-in Shader Guide). You can extend this by making your own shaders.
Shaders in Unity can be written in one of three different ways:
Surface Shaders
Surface Shaders are your best option if your shader needs to be affected by lights and shadows. Surface shaders make it easy to write complex shaders in a compact way - it's a higher level of abstraction for interaction with Unity's lighting pipeline. Most surface shaders automatically support both forward and deferred lighting. You write surface shaders in a couple of lines of Cg/HLSL and a lot more code gets auto-generated from that.
Do not use surface shaders if your shader is not doing anything with lights. For Image Effects or many special-effect shaders, surface shaders are a suboptimal option, since they will do a bunch of lighting calculations for no good reason!
Vertex and Fragment Shaders
Vertex and Fragment Shaders will be required, if your shader doesn't need to interact with lighting, or if you need some very exotic effects that the surface shaders can't handle. Shader programs written this way are the most flexible way to create the effect you need (even surface shaders are automatically converted to a bunch of vertex and fragment shaders), but that comes at a price: you have to write more code and it's harder to make it interact with lighting. These shaders are written in Cg/HLSL as well.
Fixed Function Shaders
Fixed Function Shaders need to be written for old hardware that doesn't support programmable shaders. You will probably want to write fixed function shaders as an n-th fallback to your fancy fragment or surface shaders, to make sure your game still renders something sensible when run on old hardware or simpler mobile platforms. Fixed function shaders are entirely written in a language called ShaderLab, which is similar to Microsoft's .FX files or NVIDIA's CgFX.
ShaderLab
Regardless of which type you choose, the actual meat of the shader code will always be wrapped in ShaderLab, which is used to organize the shader structure. It looks like this:
Shader "MyShader" {
Properties {
_MyTexture ("My Texture", 2D) = "white" { }
// other properties like colors or vectors go here as well
}
SubShader {
// here goes the 'meat' of your
// - surface shader or
// - vertex and fragment shader or
// - fixed function shader
}
SubShader {
// here goes a simpler version of the SubShader above that can run on older graphics cards
}
}
We recommend that you start by reading about some basic concepts of the ShaderLab syntax in the ShaderLab reference and then move on to the tutorials listed below.
The tutorials include plenty of examples for the different types of shaders. For even more examples of surface shaders in particular, you can get the source of Unity's built-in shaders from the Resources section. Unity's Image Effects package contains a lot of interesting vertex and fragment shaders.
Read on for an introduction to shaders, and check out the shader reference!
Page last updated: 2012-08-14
ShaderTut1
In this tutorial you will learn how to write your own shaders and make your game look a lot better.
Unity is equipped with a powerful shading and material language called ShaderLab. In style it is similar to the CgFX and Direct3D Effects (.FX) languages: it describes everything needed to display a Material.
Shaders describe properties that are exposed in Unity's Material Inspector, and multiple shader implementations (SubShaders) targeted at different graphics hardware capabilities, each describing the complete graphics hardware rendering state, the fixed-function pipeline setup and the vertex/fragment programs to use. Vertex and fragment programs are written in the high-level Cg/HLSL programming language.
In this tutorial we describe how to write shaders using both the fixed-function and the programmable pipelines. We assume that the reader has a basic understanding of OpenGL or Direct3D render states and programmable pipelines, and some knowledge of the Cg, HLSL or GLSL programming languages. Some shader tutorials and documentation can be found on the NVIDIA and AMD developer sites.
Getting started
To create a new shader, either create one from the menu bar, or duplicate an existing shader and work from that. The new shader can be edited by double-clicking it in the Project View.
Let's start with a very basic shader:
Shader "Tutorial/Basic" {
Properties {
_Color ("Main Color", Color) = (1,0.5,0.5,1)
}
SubShader {
Pass {
Material {
Diffuse [_Color]
}
Lighting On
}
}
}
This simple shader demonstrates one of the most basic shaders possible. It defines a color property called "Main Color" and assigns it a default rose-like color (red=100%, green=50%, blue=50%, alpha=100%). It then renders the object by invoking a Pass; in that pass it sets the diffuse material component to the property _Color and turns on per-vertex lighting.
To test this shader, create a new material, select the shader from the drop-down menu and assign the material to an object. Tweak the color in the Material Inspector and watch the changes. Time to move on to more complex things!
Basic Vertex Lighting
If you open an existing complex shader, it can be a bit hard to get a good overview. To get started, let's dissect the built-in VertexLit shader that ships with Unity. This shader uses the fixed-function pipeline to do standard per-vertex lighting.
Shader "VertexLit" {
Properties {
_Color ("Main Color", Color) = (1,1,1,0.5)
_SpecColor ("Spec Color", Color) = (1,1,1,1)
_Emission ("Emmisive Color", Color) = (0,0,0,0)
_Shininess ("Shininess", Range (0.01, 1)) = 0.7
_MainTex ("Base (RGB)", 2D) = "white" {}
}
SubShader {
Pass {
Material {
Diffuse [_Color]
Ambient [_Color]
Shininess [_Shininess]
Specular [_SpecColor]
Emission [_Emission]
}
Lighting On
SeparateSpecular On
SetTexture [_MainTex] {
constantColor [_Color]
Combine texture * primary DOUBLE, texture * constant
}
}
}
}
All shaders start with the keyword Shader, followed by a string that represents the name of the shader. This is the name that is shown in the Inspector. All the code for this shader must be placed in the curly braces { } after it (this is called a block).
- The name should be short and descriptive. It does not have to match the .shader file name.
- To put shaders in submenus in Unity, use slashes - e.g. MyShaders/Test would be shown as Test in a submenu called MyShaders.
The shader is composed of a Properties block followed by SubShader blocks. Each of these is described in a section below.
Properties
At the beginning of the shader block you can define any properties that artists can edit in the Material Inspector. In the VertexLit example the properties look like this:

The properties are listed on separate lines inside the Properties block. Each property starts with the internal name (Color, MainTex). After this, in parentheses, come the name shown in the inspector and the type of the property. After that, the default value for this property is listed.

The list of possible types is in the Properties Reference. The default value depends on the property type. In the example of a color, the default value should be a four-component vector.
We now have our properties defined, and are ready to start writing the actual shader.
The shader body
Before we move on, let's define the basic structure of a shader file.
Different graphics hardware has different capabilities. For example, some graphics cards support fragment programs and others don't; some can lay down four textures per pass while others can do only two or one. To allow you to make full use of whatever hardware your user has, a shader can contain multiple SubShaders. When Unity renders a shader, it will go over all subshaders and use the first one that the hardware supports.
Shader "Structure Example" {
Properties { /* ...shader properties... */ }
SubShader {
// ...subshader that uses vertex/fragment programs...
}
SubShader {
// ...subshader that uses four textures per pass...
}
SubShader {
// ...subshader that uses two textures per pass...
}
SubShader {
// ...subshader that might look ugly but runs on anything :)
}
}
This system allows Unity to support all existing hardware and maximize the quality on each. It does, however, result in some long shaders.
Inside each SubShader block you set the rendering state shared by all passes, and define the rendering passes themselves. A complete list of available commands can be found in the SubShader Reference.
Passes
Each subshader is a collection of passes. For each pass, the object geometry is rendered, so there must be at least one pass. Our VertexLit shader has just one pass:
// ...snip...
Pass {
Material {
Diffuse [_Color]
Ambient [_Color]
Shininess [_Shininess]
Specular [_SpecColor]
Emission [_Emission]
}
Lighting On
SeparateSpecular On
SetTexture [_MainTex] {
constantColor [_Color]
Combine texture * primary DOUBLE, texture * constant
}
}
// ...snip...
The commands defined in a pass configure the graphics hardware to render the geometry in a specific way.
In the example above we have a Material block that binds our property values to the fixed-function lighting material settings. The command Lighting On turns on the standard vertex lighting, and SeparateSpecular On enables the use of a separate color for the specular highlight.
All of these commands so far map very directly to the fixed-function OpenGL/Direct3D hardware model. Consult the OpenGL red book for more information on this.
The next command, SetTexture, is very important. SetTexture commands define the textures we want to use and how to mix, combine and apply them in our rendering. The SetTexture command is followed by the property name of the texture we would like to use (_MainTex here). This is followed by a combiner block that defines how the texture is applied. The commands in the combiner block are executed for each pixel that is rendered on screen.
Within this block we set a constant color value, namely the color of the Material, _Color. We'll use this constant color below.
In the next command we specify how to mix the texture with the color values. We do this with the Combine command, which specifies how to blend the texture with another texture or with a color. Generally it looks like this:
Combine ColorPart, AlphaPart
Here ColorPart and AlphaPart define the blending of the color (RGB) and alpha (A) components respectively. If AlphaPart is omitted, it uses the same blending as ColorPart.
In our VertexLit example:
Combine texture * primary DOUBLE, texture * constant
Here texture is the color coming from the current texture (_MainTex here). It is multiplied (*) with the primary vertex color. The primary color is the vertex lighting color, calculated from the Material values above. Finally, the result is multiplied by two to increase the lighting intensity (DOUBLE). The alpha value (after the comma) is texture multiplied by the constant value (set with constantColor above). Another often-used combiner mode is called previous (not used in this shader). It is the result of any previous SetTexture step, and can be used to combine several textures and/or colors with each other.
Summary
Our VertexLit shader configures standard vertex lighting and sets up the texture combiners so that the rendered lighting intensity is doubled.
The shader could contain more passes, which would be rendered one after the other. For now, however, that is not necessary, as we have the effect we need from a single pass. We only need one SubShader, since we make no use of any advanced features; this particular shader will work on any graphics card that Unity supports.
The VertexLit shader is one of the most basic shaders we can think of. We did not use any hardware-specific operations, nor any of the more special and cool commands that ShaderLab and Cg have to offer.
In the next chapter we'll proceed by explaining how to write custom vertex and fragment programs using the Cg language.
Page last updated: 2012-11-09
ShaderTut2
This tutorial will teach you how to write custom vertex and fragment programs in Unity shaders. For a basic introduction to ShaderLab see the Getting Started tutorial. If you want to write shaders that interact with lighting, read about Surface Shaders instead.
Lets start with a small recap of the general structure of a shader:
Shader "MyShaderName" {
Properties {
// ... properties here ...
}
SubShader {
// ... subshader for graphics hardware A ...
Pass {
// ... pass commands ...
}
// ... more passes if needed ...
}
SubShader {
// ... subshader for graphics hardware B ...
}
// ... Optional fallback ...
FallBack "VertexLit"
}
Here at the end we introduce a new command:
FallBack "VertexLit"
The Fallback command can be used at the end of the shader; it tells which shader should be used if no SubShaders from the current shader can run on the user's graphics hardware. The effect is the same as including all SubShaders from the fallback shader at the end. For example, if you were to write a normal-mapped shader, then instead of writing a very basic non-normal-mapped subshader for old graphics cards you can just fall back to the built-in VertexLit shader.
The basic building blocks of the shader are introduced in the first shader tutorial while the full documentation of Properties, SubShaders and Passes are also available.
A quick way of building SubShaders is to use passes defined in other shaders. The command UsePass does just that, so you can reuse shader code in a neat fashion. As an example the following command uses the pass with the name "BASE" from the built-in Specular shader:
UsePass "Specular/BASE"
In order for UsePass to work, a name must be given to the pass one wishes to use. The Name command inside the pass gives it a name:
Name "MyPassName"
Vertex and fragment programs
We described a pass that used just a single texture combine instruction in the first tutorial. Now it is time to demonstrate how we can use vertex and fragment programs in our pass.
When you use vertex and fragment programs (the so called "programmable pipeline"), most of the hardcoded functionality ("fixed function pipeline") in the graphics hardware is switched off. For example, using a vertex program turns off standard 3D transformations, lighting and texture coordinate generation completely. Similarly, using a fragment program replaces any texture combine modes that would be defined in SetTexture commands; thus SetTexture commands are not needed.
Writing vertex/fragment programs requires a thorough knowledge of 3D transformations, lighting and coordinate spaces - because you have to rewrite the fixed functionality that is built into API's like OpenGL yourself. On the other hand, you can do much more than what's built in!
Using Cg in ShaderLab
Shaders in ShaderLab are usually written in the Cg programming language by embedding "Cg snippets" in the shader text. Cg snippets are compiled into low-level shader assembly by the Unity editor, and the final shader that is included in your game's data files only contains this low-level assembly. When you select a shader in the Project View, the Inspector shows the shader text after Cg compilation, which might help as a debugging aid. Unity automatically compiles Cg snippets for Direct3D, OpenGL, Flash and so on, so your shaders will just work on all platforms. Note that because Cg code is compiled by the editor, you can't create Cg shaders from scripts at runtime.
In general, Cg snippets are placed inside Pass blocks. They look like this:
Pass {
// ... the usual pass state setup ...
CGPROGRAM
// compilation directives for this snippet, e.g.:
#pragma vertex vert
#pragma fragment frag
// the Cg code itself
ENDCG
// ... the rest of pass setup ...
}
The following example demonstrates a complete shader with Cg programs that renders object normals as colors:
Shader "Tutorial/Display Normals" {
SubShader {
Pass {
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct v2f {
float4 pos : SV_POSITION;
float3 color : COLOR0;
};
v2f vert (appdata_base v)
{
v2f o;
o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
o.color = v.normal * 0.5 + 0.5;
return o;
}
half4 frag (v2f i) : COLOR
{
return half4 (i.color, 1);
}
ENDCG
}
}
Fallback "VertexLit"
}
When applied on an object it will result in an image like this (if your graphics card supports vertex & fragment programs of course):

Our "Display Normals" shader does not have any properties, contains a single SubShader with a single Pass that is empty except for the Cg code. Finally, a fallback to the built-in VertexLit shader is defined. Let's dissect the Cg code part by part:
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
// ... snip ...
ENDCG
The whole Cg snippet is written between CGPROGRAM and ENDCG keywords. At the start compilation directives are given as #pragma statements:
- #pragma vertex name tells that the code contains a vertex program in the given function (vert here).
- #pragma fragment name tells that the code contains a fragment program in the given function (frag here).
Following the compilation directives is just plain Cg code. We start by including a builtin Cg file:
#include "UnityCG.cginc"
The UnityCG.cginc file contains commonly used declarations and functions so that the shaders can be kept smaller (see the shader include files page for details). Here we'll use the appdata_base structure from that file. We could, of course, define these directly in the shader instead of including the file.
Next we define a "vertex to fragment" structure (here named v2f) - what information is passed from the vertex to the fragment program. We pass the position and color parameters. The color will be computed in the vertex program and just output in the fragment program.
We proceed by defining the vertex program - vert function. Here we compute the position and output input normal as a color:
o.color = v.normal * 0.5 + 0.5;
Normal components are in -1..1 range, while colors are in 0..1 range, so we scale and bias the normal in the code above. Next we define a fragment program - frag function that just outputs the calculated color and 1 as the alpha component:
half4 frag (v2f i) : COLOR
{
return half4 (i.color, 1);
}
That's it, our shader is finished! Even this simple shader is very useful to visualize mesh normals.
Of course, this shader does not respond to lights at all, and that's where things get a bit more interesting; read about Surface Shaders for details.
Using shader properties in Cg code
When you define properties in the shader, you give them a name like _Color or _MainTex. To use them in Cg you just have to define a variable of a matching name and type. Unity will automatically set Cg variables that have names matching with shader properties.
Here is a complete shader that displays a texture modulated by a color. Of course, you could easily do the same in a texture combiner call, but the point here is just to show how to use properties in Cg:
Shader "Tutorial/Textured Colored" {
Properties {
_Color ("Main Color", Color) = (1,1,1,0.5)
_MainTex ("Texture", 2D) = "white" { }
}
SubShader {
Pass {
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
float4 _Color;
sampler2D _MainTex;
struct v2f {
float4 pos : SV_POSITION;
float2 uv : TEXCOORD0;
};
float4 _MainTex_ST;
v2f vert (appdata_base v)
{
v2f o;
o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
o.uv = TRANSFORM_TEX (v.texcoord, _MainTex);
return o;
}
half4 frag (v2f i) : COLOR
{
half4 texcol = tex2D (_MainTex, i.uv);
return texcol * _Color;
}
ENDCG
}
}
Fallback "VertexLit"
}
The structure of this shader is the same as in the previous example. Here we define two properties, namely _Color and _MainTex. Inside Cg code we define corresponding variables:
float4 _Color; sampler2D _MainTex;
See Accessing Shader Properties in Cg for more information.
The vertex and fragment programs here don't do anything fancy; vertex program uses the TRANSFORM_TEX macro from UnityCG.cginc to make sure texture scale&offset is applied correctly, and fragment program just samples the texture and multiplies by the color property.
Note that because we're writing our own fragment program here, we don't need any SetTexture commands. How the textures are applied in the shader is entirely controlled by the fragment program.
Summary
We have shown how custom shader programs can be written in a few easy steps. While the examples shown here are very simple, there's nothing preventing you from writing arbitrarily complex shader programs! This can help you take full advantage of Unity and achieve optimal rendering results.
The complete ShaderLab reference manual is here. We also have a forum for shaders at forum.unity3d.com, so go there to get help with your shaders! Happy programming, and enjoy the power of Unity and ShaderLab.
Page last updated: 2012-09-04
DirectX 11
Unity 4 introduces the ability to use the DirectX 11 graphics API, with all the goodies that you expect from it: compute shaders, tessellation shaders, Shader Model 5.0 and so on.
Enabling DirectX 11
To enable DirectX 11 for your game builds and the editor, set the "Use DX11" option in Player Settings. The Unity editor needs to be restarted for this to take effect.
Note that DX11 requires Windows Vista or later and at least a DX10-level GPU (preferably DX11-level). The Unity editor window title ends with "<DX11>" when it is actually running in DX11 mode.
Image Effects that can take advantage of DX11
- Depth of Field effect (optimized Bokeh texture splatting)
- Noise and Grain effect (higher quality noise patterns)
- Motion Blur effect (higher quality reconstruction filter)
Compute Shaders
Compute shaders allow you to use the GPU as a massively parallel processor. See the Compute Shaders page for more details.
Tessellation & Geometry Shaders
Surface shaders have support for simple tessellation and displacement; see the Surface Shader Tessellation page.
When manually writing shader programs, you can use the full set of DX11 Shader Model 5.0 features, including geometry, hull and domain shaders.
DirectX 11 Examples
The following screenshots show examples of what becomes possible with DirectX 11.

The volumetric explosion in these shots is rendered using raymarching, which becomes feasible with Shader Model 5.0. Moreover, because it generates and updates depth values, it is fully compatible with depth-based image effects such as depth of field or motion blur.

The hair in this shot is implemented via DirectX 11 tessellation and geometry shaders to dynamically generate and animate individual strands of hair. Shading is based on a model proposed by Kajiya-Kay that enables more believable diffuse and specular behaviour.

Similar to the hair technique above, the fur on the slippers shown here is also generated as geometry emitted from a simple base slipper mesh.

The blur effect in this image (known as Bokeh) is based on splatting a texture on top of very bright pixels. This can create very believable camera lens blurs, especially when used in conjunction with HDR rendering.

Exaggerated lens blur similar to the screenshot above. This is a possible result using the new Depth of Field effect.
Page last updated: 2012-10-30
Compute Shaders
Compute Shaders are programs that run on the graphics card, outside of the normal rendering pipeline. They can be used for massively parallel GPGPU algorithms, or to accelerate parts of game rendering. In order to use them efficiently, an in-depth knowledge of GPU architectures and parallel algorithms is often needed, as well as some familiarity with DirectCompute, OpenCL or CUDA.
Compute shaders in Unity are built on top of DirectX 11 DirectCompute technology; and currently require Windows Vista or later and a GPU capable of Shader Model 5.0.
Compute shader assets
Similar to normal shaders, compute shaders are asset files in your project, with a .compute file extension. They are written in DirectX 11 style HLSL, with a minimal number of #pragma compilation directives to indicate which functions to compile as compute shader kernels.
Here's a minimal example of a compute shader file:
// test.compute
#pragma kernel FillWithRed
RWTexture2D<float4> res;
[numthreads(1,1,1)]
void FillWithRed (uint3 dtid : SV_DispatchThreadID)
{
res[dtid.xy] = float4(1,0,0,1);
}
The example above does not do anything remotely interesting - it just fills the output texture with red.
The language is standard DX11 HLSL; the only addition is the #pragma kernel FillWithRed directive. A compute shader asset file must contain at least one "compute kernel" that can be invoked, and that function is indicated by the #pragma directive. There can be more kernels in the file; just add multiple #pragma kernel lines.
The #pragma kernel line can optionally be followed by a number of preprocessor macros to define while compiling that kernel, for example:
#pragma kernel KernelOne SOME_DEFINE DEFINE_WITH_VALUE=1337
#pragma kernel KernelTwo OTHER_DEFINE
// ...
Invoking compute shaders
In your script, define a variable of ComputeShader type and assign a reference to the asset; you can then invoke it with the ComputeShader.Dispatch function. See the scripting reference for the ComputeShader class for more details.
Closely related to compute shaders is the ComputeBuffer class, which defines an arbitrary data buffer ("structured buffer" in DX11 lingo). Render Textures can also be written to from compute shaders if they have their "random access" flag set ("unordered access view" in DX11); see RenderTexture.enableRandomWrite.
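As a minimal sketch, dispatching the FillWithRed kernel from the earlier test.compute example onto a random-access Render Texture might look like this (the class name and texture size are arbitrary; the shader field must be assigned in the Inspector):

```csharp
using UnityEngine;

public class FillRedExample : MonoBehaviour
{
    // Assign the test.compute asset to this field in the Inspector.
    public ComputeShader shader;

    void Start()
    {
        // Create a render texture with random-access ("unordered access") writes enabled.
        RenderTexture tex = new RenderTexture(256, 256, 0);
        tex.enableRandomWrite = true;
        tex.Create();

        // Bind the texture to the kernel's RWTexture2D<float4> res variable and dispatch
        // one thread group per pixel (the kernel declares [numthreads(1,1,1)]).
        int kernel = shader.FindKernel("FillWithRed");
        shader.SetTexture(kernel, "res", tex);
        shader.Dispatch(kernel, tex.width, tex.height, 1);
    }
}
```

After Dispatch returns, the texture can be used like any other Render Texture, for example assigned to a material.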
Page last updated: 2012-08-14
GraphicsEmulation
You can choose to emulate less capable graphics hardware when working in the Unity editor. This is very handy when writing custom shaders and rendering effects, and is a quick way to test how your game will look on the eight-year-old graphics card that someone might have.
To enable Graphics emulation, go to , and choose your desired emulation level.
Note: The available graphic emulation options change depending on the platform you are currently targeting. More information can be found on the Publishing builds page.

Enabling Graphics Emulation
Technical Details
Graphics emulation limits the graphics capabilities that are supported, but it does not emulate the performance of graphics hardware. Your game in the editor will still be rendered by your graphics card; more and more features will simply be disabled as you reduce the emulation quality.
While emulation is a quick way to check out graphics capabilities, you should still test your game on actual hardware. This will reveal real performance and any peculiarities of the specific graphics card, operating system or driver version.
Emulation Levels
Graphics emulation levels are the following:
In web player or standalone mode:
| No Emulation | No emulation is performed. |
| Shader Model 3 | Emulates graphics card with Shader Model 3.0 level capabilities. Long vertex & fragment shader programs, realtime shadows, HDR. |
| Shader Model 2 | Shader Model 2.0 capabilities. Vertex & fragment programs, realtime shadows. No HDR, maximum 4 texture combiner stages. |
| Shader Model 1 | Shader Model 1.x capabilities. Vertex programs, 4 texture combiner stages. Not supported: fragment programs, shadows, HDR, depth textures, multiple render targets. |
| DirectX 7 | DirectX 7 level capabilities. Vertex programs (usually in software mode), two texture combiner stages. Not supported: fragment programs, shadows, HDR, depth textures, 3D textures, min/max/sub blending. |
In iOS or Android mode:
| No Emulation | No emulation is performed. |
| OpenGL ES 1.x | OpenGL ES 1.1: Four texture combiner stages. Not supported: vertex or fragment programs, shadows and pretty much all other graphics features ;) |
| OpenGL ES 2.0 | OpenGL ES 2.0: Vertex & fragment programs, four texture combiner stages. Not supported: HDR, 3D textures. |
When your graphics card does not support all the capabilities of some emulation level, that level will be disabled. For example, the Intel GMA950 (Intel 915/945/3000) card does not support Shader Model 3.0, so there's no way to emulate that level.
Page last updated: 2012-08-17
AssetDatabase
AssetDatabase is an API which allows you to access the assets contained in your project. Among other things, it provides methods to find and load assets and also to create, delete and modify them. The Unity Editor uses the AssetDatabase internally to keep track of asset files and maintain the linkage between assets and objects that reference them. Since Unity needs to keep track of all changes to the project folder, you should always use the AssetDatabase API rather than the filesystem if you want to access or modify asset data.
The AssetDatabase interface is only available in the editor and has no function in the built player. Like all other editor classes, it is only available to scripts placed in the Editor folder (just create a folder named Editor in the main Assets folder of your project if there isn't one already).
Importing an Asset
Unity normally imports assets automatically when they are dragged into the project but it is also possible to import them under script control. To do this you can use the AssetDatabase.ImportAsset method as in the example below.
using UnityEngine;
using UnityEditor;
public class ImportAsset {
[MenuItem ("AssetDatabase/ImportExample")]
static void ImportExample ()
{
AssetDatabase.ImportAsset("Assets/Textures/texture.jpg", ImportAssetOptions.Default);
}
}
You can also pass an extra parameter of type ImportAssetOptions to the AssetDatabase.ImportAsset call. The scripting reference page documents the different options and their effects on the function's behaviour.
Loading an Asset
The editor loads assets only as needed, say if they are added to the scene or edited from the Inspector panel. However, you can load and access assets from a script using AssetDatabase.LoadAssetAtPath, AssetDatabase.LoadMainAssetAtPath, AssetDatabase.LoadAllAssetRepresentationsAtPath and AssetDatabase.LoadAllAssetsAtPath. See the scripting documentation for further details.
using UnityEngine;
using UnityEditor;
public class LoadAsset {
[MenuItem ("AssetDatabase/LoadAssetExample")]
static void LoadAssetExample ()
{
Texture2D t = AssetDatabase.LoadAssetAtPath("Assets/Textures/texture.jpg", typeof(Texture2D)) as Texture2D;
}
}
File Operations using the AssetDatabase
Since Unity keeps metadata about asset files, you should never create, move or delete them using the filesystem. Instead, you can use AssetDatabase.Contains, AssetDatabase.CreateAsset, AssetDatabase.CreateFolder, AssetDatabase.RenameAsset, AssetDatabase.CopyAsset, AssetDatabase.MoveAsset, AssetDatabase.MoveAssetToTrash and AssetDatabase.DeleteAsset.
using UnityEngine;
using UnityEditor;
public class AssetDatabaseIOExample {
[MenuItem ("AssetDatabase/FileOperationsExample")]
static void Example ()
{
string ret;
// Create
Material material = new Material (Shader.Find("Specular"));
AssetDatabase.CreateAsset(material, "Assets/MyMaterial.mat");
if(AssetDatabase.Contains(material))
Debug.Log("Material asset created");
// Rename
ret = AssetDatabase.RenameAsset("Assets/MyMaterial.mat", "MyMaterialNew");
if(ret == "")
Debug.Log("Material asset renamed to MyMaterialNew");
else
Debug.Log(ret);
// Create a Folder
ret = AssetDatabase.CreateFolder("Assets", "NewFolder");
if(AssetDatabase.GUIDToAssetPath(ret) != "")
Debug.Log("Folder asset created");
else
Debug.Log("Couldn't find the GUID for the path");
// Move
ret = AssetDatabase.MoveAsset(AssetDatabase.GetAssetPath(material), "Assets/NewFolder/MyMaterialNew.mat");
if(ret == "")
Debug.Log("Material asset moved to NewFolder/MyMaterialNew.mat");
else
Debug.Log(ret);
// Copy
if(AssetDatabase.CopyAsset(AssetDatabase.GetAssetPath(material), "Assets/MyMaterialNew.mat"))
Debug.Log("Material asset copied as Assets/MyMaterialNew.mat");
else
Debug.Log("Couldn't copy the material");
// Manually refresh the Database to inform of a change
AssetDatabase.Refresh();
Material MaterialCopy = AssetDatabase.LoadAssetAtPath("Assets/MyMaterialNew.mat", typeof(Material)) as Material;
// Move to Trash
if(AssetDatabase.MoveAssetToTrash(AssetDatabase.GetAssetPath(MaterialCopy)))
Debug.Log("MaterialCopy asset moved to trash");
// Delete
if(AssetDatabase.DeleteAsset(AssetDatabase.GetAssetPath(material)))
Debug.Log("Material asset deleted");
if(AssetDatabase.DeleteAsset("Assets/NewFolder"))
Debug.Log("NewFolder deleted");
// Refresh the AssetDatabase after all the changes
AssetDatabase.Refresh();
}
}
Using AssetDatabase.Refresh
When you have finished modifying assets, you should call AssetDatabase.Refresh to commit your changes to the database and make them visible in the project.
Page last updated: 2012-11-15
BuildPlayerPipeline
When building a player, you sometimes want to modify the built player in some way. For example, you might want to add a custom icon, copy some documentation next to the player or build an installer. Doing this manually can become tedious, but if you know how to write sh or perl scripts you can automate the task.
Mac OSX
After building a player, Unity automatically looks for a sh or perl script called PostprocessBuildPlayer (without any extension) in your project's Assets/Editor folder. If the file is found, it is invoked when the player finishes building.
In this script you can post process the player in any way you like. For example build an installer out of the player.
You can use perl, sh or any other commandline compatible language.
Unity passes some useful command line arguments to the script, so you know what kind of player it is and where it is stored.
The current directory will be set to be the project folder, that is the folder containing the Assets folder.
#!/usr/bin/perl
my $installPath = $ARGV[0];
# The type of player built:
# "dashboard", "standaloneWin32", "standaloneOSXIntel", "standaloneOSXPPC", "standaloneOSXUniversal", "webplayer"
my $target = $ARGV[1];
# What optimizations are applied. At the moment either "" or "strip" when Strip debug symbols is selected.
my $optimization = $ARGV[2];
# The name of the company set in the project settings
my $companyName = $ARGV[3];
# The name of the product set in the project settings
my $productName = $ARGV[4];
# The default screen width of the player.
my $width = $ARGV[5];
# The default screen height of the player
my $height = $ARGV[6];
print ("\n*** Building at '$installPath' with target: $target \n");
Note that some languages, such as Python, pass the name of the script as one of the command line arguments. If you are using one of these languages then the arguments will effectively be shifted along one place in the array (so the install path will be in ARGV[1], etc).
In order to see this feature in action please visit the Example Projects page on our website and download the PostprocessBuildPlayer example package file and import it for use into your own project. It uses the Build Player Pipeline feature to offer customized post-processing of web player builds in order to demonstrate the types of custom build behavior you can implement in your own PostprocessBuildPlayer script.
Windows
On Windows, the PostprocessBuildPlayer script is not supported, but you can use editor scripting to achieve the same effect. Use BuildPipeline.BuildPlayer to run the build and then follow it with whatever postprocessing code you need:
using UnityEditor;
using System.Diagnostics;
public class ScriptBatch : MonoBehaviour
{
[MenuItem("MyTools/Windows Build With Postprocess")]
public static void BuildGame ()
{
// Get filename.
string path = EditorUtility.SaveFolderPanel("Choose Location of Built Game", "", "");
// Scenes to include in the build (adjust the paths to match your project).
string[] levels = new string[] {"Assets/Scene1.unity", "Assets/Scene2.unity"};
// Build player.
BuildPipeline.BuildPlayer(levels, path + "/BuiltGame.exe", BuildTarget.StandaloneWindows, BuildOptions.None);
// Copy a file from the project folder to the build folder, alongside the built game.
FileUtil.CopyFileOrDirectory("Assets/WebPlayerTemplates/Readme.txt", path + "/Readme.txt");
// Run the game (Process class from System.Diagnostics).
Process proc = new Process();
proc.StartInfo.FileName = path + "/BuiltGame.exe";
proc.Start();
}
}
Page last updated: 2012-05-04
Profiler
The Unity Profiler helps you to optimize your game. It reports how much time is spent in the various areas of your game - for example, the percentage of time spent rendering, animating or in your game logic.
You can play your game in the Editor with profiling on, and it will record performance data. The Profiler window then displays the data in a timeline, so you can see the frames or areas that spike (take more time than others). By clicking anywhere in the timeline, the bottom section of the Profiler window will display detailed information for the selected frame.
Note that profiling has to instrument your code, and this instrumentation has a small impact on the performance of your game. Typically the overhead is small enough not to affect the game's framerate. When profiling, it is usual to consider only the ratio (or percentage) of time spent in certain areas, and to focus optimization effort on the parts of the game that consume the most time. Compare profiling results before and after code changes to determine what improvement you actually achieved; sometimes a change intended to improve performance has a negative effect on frame rate, so unexpected consequences of code optimization should be expected.

Profiler window
Attaching to Unity players
To profile your game running on another device, or in a player running on another computer, you can connect the editor to that player. The Active Profiler dropdown will show all players running on the local network. These players are identified by player type and the host name running the player, e.g. "iPhonePlayer (Toms iPhone)". To be able to connect to a player, the player must be launched with the Development Build checkbox found in the Build Settings dialog ticked. From there it is also possible to tick a checkbox to make the Editor and Player autoconnect at startup.
Profiler Controls

Profiler controls are in the toolbar at the top of the window. Use these to turn profiling on and off and to navigate through profiled frames. The transport controls are at the far right end of the toolbar. Note that when the game is running and the profiler is collecting data, clicking any of these transport controls will pause the game. The controls go to the first recorded frame, step one frame back, step one frame forward and go to the last frame, respectively. The profiler does not keep all recorded frames, so the notion of the first frame should really be thought of as the oldest frame that is still kept in memory. The "current" transport button causes the profile statistics window to display data collected in real-time. The Active Profiler popup menu allows you to select whether profiling should be done in the editor or in a separate player (for example, a game running on an attached iOS device).
Deep Profiling
When you turn on Deep Profile, all your script code is profiled - that is, all function calls are recorded. This is useful to know where exactly time is spent in your game code.
Note that Deep Profiling incurs a very large overhead and uses a lot of memory, so your game will run significantly slower while profiling. For small games with simple scripting, deep profiling is usually fast enough to be workable. If you find that Deep Profiling for your entire game causes the frame rate to drop so much that the game barely runs, consider using the approach described below instead. You may find deep profiling more helpful while designing your game and deciding how best to implement key features. Note that for large games deep profiling may cause Unity to run out of memory, in which case it may not be possible at all.
Manually profiling blocks of your script code has a much smaller overhead than Deep Profiling. Use the Profiler.BeginSample and Profiler.EndSample scripting functions to enable and disable profiling around sections of code.
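As a sketch, a script might wrap an expensive section like this (the sample label "MyExpensiveCode" and the loop are placeholders for your own code):

```csharp
using UnityEngine;

public class ProfiledBehaviour : MonoBehaviour
{
    void Update()
    {
        // Everything between BeginSample and EndSample is reported
        // under "MyExpensiveCode" in the CPU Usage hierarchy.
        Profiler.BeginSample("MyExpensiveCode");
        for (int i = 0; i < 1000; i++)
        {
            // ... the code being measured ...
        }
        Profiler.EndSample();
    }
}
```

Unlike Deep Profiling, only the labeled section is instrumented, so the rest of the game runs at normal speed.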
View SyncTime
When running at a fixed framerate or running in sync with the vertical blank, Unity records the waiting time in "Wait For Target FPS". By default this amount of time is not shown in the profiler. To view how much time is spent waiting, you can toggle "View SyncTime". This is also a measure of how much headroom you have before losing frames.
Profiler Timeline

The upper part of the Profiler window displays performance data over time. When you run a game, data is recorded each frame, and the history of the last several hundred frames is displayed. Clicking on a particular frame will display its details in the lower part of the window. Different details are displayed depending on which timeline area is currently selected.
The vertical scale of the timeline is managed automatically and will attempt to fill the vertical space of the window. Note that to get more detail in, say, the CPU Usage area, you can remove the Memory and Rendering areas. Also, the splitter between the timeline and the statistics area can be selected and dragged downward to increase the screen area used for the timeline chart.
The timeline consists of several areas: CPU Usage, Rendering and Memory. These areas can be removed by clicking the close button in the panel, and re-added again using the Add Area drop down in the Profile Controls bar.
CPU Usage Area

The CPU Usage area displays where time is spent in your game. When it is selected, the lower pane displays hierarchical time data for the selected frame.
- Hierarchy mode: Displays hierarchical time data.
- Group Hierarchy mode: Groups time data into logical groups (Rendering, Physics, Scripts etc.). Because the children of a group can themselves belong to a different group (e.g. a script might call rendering functions), the percentages of group times often add up to more than 100%. (This is not a bug.)
The way the CPU chart is stacked can be reordered by simply dragging chart labels up & down.
When an item is selected in the lower pane, its contribution to the CPU chart is highlighted (and the rest is dimmed). Clicking on an item again deselects it.

Shader.SetPass is selected and its contribution is highlighted in the chart.
In the hierarchical time data, the self time refers to the amount of time spent in a particular function, not including the time spent calling sub-functions. In the screenshot above, for example, 51.2% of the time is spent in the Camera.Render function. This function does a lot of work and calls the various drawing and culling functions; excluding all those calls, only 0.8% of the time is spent in the Camera.Render function itself.
Rendering Area

The Rendering area displays rendering statistics. The number of Draw Calls, Triangles and Vertices rendered is displayed graphically in the timeline. The lower pane displays more rendering statistics, which closely match the ones shown in the Game View Rendering Statistics window.
Memory Area

The Memory area displays some memory usage data:
- Total Allocated is the total RAM used by the application. Note that in the Unity Editor this is memory used by everything in the editor; game builds will use much less.
- Texture Memory is the amount of video memory used by the textures in the current frame.
- Object Count is the total number of Objects that are created. If this number rises over time then it means your game is creating some objects that are never destroyed.
Audio Area

The Audio area displays audio statistics:
- Playing Sources is the total number of sources playing in the scene at a specific frame. Monitor this to see if audio is overloaded.
- Paused Sources is the total number of sources paused in the scene at a specific frame.
- Audio Voice is the actual number of audio voices (FMOD channels) used. PlayOneShot uses voices that are not shown in Playing Sources.
- Audio Memory is the total RAM used by the audio engine.
CPU usage can be seen at the bottom. Monitor this to see if audio alone is taking up too much CPU.
Note: When an audio asset in Ogg Vorbis format is imported with the Compressed In Memory option, the memory usage reported by the profiler may be unexpectedly low. This happens for platforms that use FMOD audio - FMOD doesn't support Ogg Vorbis with the Compressed In Memory option, so the import setting is silently changed to Stream From Disk (which has much lower memory overheads).
Physics Area

The Physics area shows the following statistics about the physics in the scene:-
- Active Rigidbodies is the number of rigidbodies that are not currently sleeping (ie, they are moving or just coming to rest).
- Sleeping Rigidbodies is the number of rigidbodies that are completely at rest and therefore don't need to be updated actively by the physics engine (see Rigidbody Sleeping for further details).
- Number of Contacts is the total number of points of contact between all colliders in the scene.
- Static Colliders is the number of colliders attached to non-rigidbody objects (ie, objects which never move under physics).
- Dynamic Colliders is the number of colliders attached to rigidbody objects (ie, objects which do move under physics).
GPU Area

The GPU profiler is similar to the CPU profiler with the various contributions to rendering time shown as a hierarchy in the bottom panel. Selecting an item from the hierarchy will show a breakdown in the panel to the right.
Please note that on the Mac, GPU profiling is only available under OSX 10.7 Lion and later versions.
iOS
Remote profiling can be enabled on iOS devices by following these steps:
- Connect your iOS device to your WiFi network (the profiler uses a local/adhoc WiFi network to send profiling data from the device to the Unity Editor).
- Check the "Autoconnect Profiler" checkbox in Unity's build settings dialog.
- Attach your device to your Mac via cable and hit "Build & Run" in the Unity Editor.
- When the app launches on the device, open the profiler window in the Unity Editor (Window->Profiler).
If you are using a firewall, you need to make sure that ports 54998 to 55511 are open in the firewall's outbound rules - these are the ports used by Unity for remote profiling.
Note: Sometimes the Unity Editor might not autoconnect to the device. In such cases the profiler connection can be initiated from the Active Profiler drop down menu in the Profiler Window by selecting the appropriate device.
Android
Remote profiling can be enabled on Android devices through two different paths: WiFi or ADB.
For WiFi profiling, follow these steps:
- Make sure to disable Mobile Data on your Android device.
- Connect your Android device to your WiFi network.
- Check the "Autoconnect Profiler" checkbox in Unity's build settings dialog.
- Attach your device to your Mac/PC via cable and hit "Build & Run" in Unity Editor.
- When the app launches on the device, open the profiler window in Unity Editor (Window->Profiler)
- If the Unity Editor fails to autoconnect to the device, select the appropriate device from the Profiler Window Active Profiler drop down menu.
Note: The Android device and host computer (running the Unity Editor) must both be on the same subnet for the device detection to work.
For ADB profiling, follow these steps:
- Attach your device to your Mac/PC via cable and make sure ADB recognizes the device (i.e. it shows in adb devices list).
- Open a Terminal window / CMD prompt and enter
adb forward tcp:54999 localabstract:Unity-<insert bundle identifier here>
- Check the "Development Build" checkbox in Unity's build settings dialog, and hit "Build & Run".
- When the app launches on the device, open the profiler window in Unity Editor (Window->Profiler)
- Select the AndroidProfiler(ADB@127.0.0.1:54999) from the Profiler Window Active Profiler drop down menu.
Note: The entry in the drop down menu is only visible when the selected target is Android.
If you are using a firewall, you need to make sure that ports 54998 to 55511 are open in the firewall's outbound rules - these are the ports used by Unity for remote profiling.
Lightmapping
This is an introductory description of lightmapping in Unity. For more advanced topics, see the in-depth description of lightmapping in Unity.
Unity has a built-in lightmapper: Beast by Illuminate Labs. Lightmapping is fully integrated into Unity, meaning that Beast will bake lightmaps for your scene based on how your scene is set up within Unity, taking into account meshes, materials, textures and lights. It also means that lightmapping is an integral part of the rendering engine - once your lightmaps are created, you don't need to do anything else; they will be automatically picked up by the objects.

Preparing the scene and baking the lightmaps
Selecting – from the menu will open the Lightmapping window:
- Make sure any mesh you want to be lightmapped has proper UVs for lightmapping. The easiest way is to choose the option in mesh import settings.
- In the pane, mark any Mesh Renderer, Skinned Mesh Renderer or Terrain as – this will tell Unity that those objects won't move or change and so can be lightmapped.
- To control the resolution of the lightmaps, go to the pane and adjust the value. (To have a better understanding on how you spend your lightmap texels, look at the small window within the and select ).
- Press
- A progress bar appears in Unity Editor's status bar, in the bottom right corner.
- When baking is done, you can see all the baked lightmaps at the bottom of the Lightmap Editor window.
Scene and game views will update - your scene is now lightmapped!
Tweaking Bake Settings
The final look of your scene depends a lot on your lighting setup and bake settings. Let's take a look at an example of some basic settings that can improve lighting quality.
This is a basic scene with a couple of cubes and one point light in the centre. The light is casting hard shadows and the effect is quite dull and artificial.
Selecting the light and opening the pane of the window exposes the Shadow Radius and Shadow Samples properties. Setting Shadow Radius to 1.2 and Shadow Samples to 100, then re-baking, produces soft shadows with a wide penumbra - our image already looks much better.
With Unity Pro we can take the scene one step further by enabling Global Illumination and adding a Sky Light. In the pane we set the number of Bounces to 1 and the Sky Light Intensity to 0.5. The result is much softer lighting with subtle diffuse interreflection effects (color bleeding from the green and blue cubes) - much nicer and it's still only 3 cubes and a light!

Lightmapping In-Depth
For more information about the various lightmapping-related settings, please refer to the in-depth description of lightmapping in Unity.
Page last updated: 2012-10-31
LightmappingInDepth
If you are about to lightmap your first scene in Unity, this Quickstart Guide might help you.
Lightmapping is fully integrated in Unity, so you can build entire levels from within the editor, lightmap them and have your materials automatically pick up the lightmaps without you having to worry about it. Lightmapping in Unity means that all your lights' properties are mapped directly to the Beast lightmapper and baked into textures for great performance. Unity Pro extends this functionality with Global Illumination, allowing you to bake realistic and beautiful lighting that would otherwise be impossible in realtime. Additionally, Unity Pro brings you sky lights and emissive materials for even more interesting scene lighting.
In this page you will find a detailed description of all the attributes found in the Lightmapping window. To open the Lightmapping window, select – .

Scene filters
At the top of the inspector are three Scene Filter buttons that enable you to apply the operation to all objects or to restrict it to lights or renderers.
オブジェクト
ライト、メッシュ レンダラおよび地形に対するオブジェクトごとのベーク設定 - 現在の選択内容によって決まります。
メッシュ レンダラおよび地形:
| Lightmap Static | メッシュ レンダラおよび地形は、ライトマッピングされるように、スタティックとしてマーキングする必要があります。 |
| Scale In Lightmap | (メッシュ レンダラのみ) この値が大きいと、解像度が所定のメッシュ レンダラ専用になります。 最終的な解像度は比例します (ライトマップでのスケール)*(オブジェクトの世界空間表面エリア)*(グローバル ベーク設定解像度値)。 0 の値の場合、オブジェクトはライトマッピングされません (その他のライトマッピングされたオブジェクトには影響します)。 |
| Lightmap Size | (地形のみ) この地形インスタンスのライトマップ サイズ。 地形は他のオブジェクトとして変更されません。代わりに、個々にライトマップされます。 |
| Atlas | Lock Atlas が無効になると、アトラスティング情報 – が自動的に更新されます。 Lock Atlas が有効になると、これらのパラメータは自動的に編集されなくなり、手動で編集できます。 |
| Lightmap Index | ライトマップ配列への索引。 |
| Tiling | (メッシュ レンダラのみ) オブジェクトのライトマップ UV のタイリング。 |
| Offset | (メッシュ レンダラのみ) オブジェクトのライトマップ UV のオフセット。 |
Lights:
| Lightmapping | The lightmapping mode: Realtime Only, Auto or Baked Only. See Dual Lightmaps below. |
| Color | The color of the light. The same property is used for realtime rendering. |
| Intensity | The intensity of the light. The same property is used for realtime rendering. |
| Bounce Intensity | A multiplier to the intensity of the indirect light emitted from this particular light source. |
| Baked Shadows | Controls whether shadows are cast from objects lit by this light (and, in the case of Auto lights, controls realtime shadows at the same time). |
| Shadow Radius | (Point and Spot lights only) Increase this value for soft direct shadows - it increases the size of the light for the shadowing (but not the lighting) calculations. |
| Shadow Angle | (Directional lights only) Increase this value for soft direct shadows - it increases the angular coverage of the light for the shadowing (but not the lighting) calculations. |
| Shadow Samples | If you've set Shadow Radius or Angle above zero, increase the number of Shadow Samples as well. Higher sample counts remove noise from the shadow penumbra, but might increase rendering times. |
Bake
Global bake settings.
| Mode | Controls both offline lightmap baking and runtime lightmap rendering modes. In Dual Lightmaps mode both near and far lightmaps will be baked; only the Deferred Lighting rendering path supports rendering dual lightmaps. Single Lightmaps mode results in only the far lightmap being baked; it can also be used to force single lightmaps mode for the Deferred Lighting rendering path. |
| Use in forward rendering | (Dual lightmaps only) Enables dual lightmaps in forward rendering. Note that this will require you to create your own shaders for the purpose. |
| Quality | Presets for high (good-looking) or low (but fast) quality bakes. They affect the number of final gather rays, the contrast threshold and some other final gather and anti-aliasing settings. |
| Bounces | The number of light bounces in the Global Illumination simulation. At least one bounce is needed to give soft, realistic indirect lighting. 0 means only direct light will be computed. |
| Sky Light Color | Sky light simulates light emitted from the sky from all directions - great for outdoor scenes. |
| Sky Light Intensity | The intensity of the sky light - a value of 0 disables the sky light. |
| Bounce Boost | Boosts indirect light. Can be used to increase the amount of bounced light in the scene without burning out the render too quickly. |
| Bounce Intensity | A multiplier to the intensity of the indirect light. |
| Final Gather Rays | The number of rays shot from every final gather point - higher values give better quality. |
| Contrast Threshold | The color contrast threshold above which new final gather points will be created by the adaptive sampling algorithm. Higher values make Beast more tolerant of lighting changes on the surface, producing smoother but less detailed lightmaps. With a lower number of final gather rays, a higher contrast threshold may be needed to prevent additional final gather points from being created. |
| Interpolation | Controls how the color from final gather points is interpolated. 0 gives linear interpolation; 1 gives advanced, gradient-based interpolation. The latter can sometimes introduce artifacts. |
| Interpolation Points | The number of final gather points to interpolate between. Higher values give smoother results, but can also smooth out details in the lighting. |
| Ambient Occlusion | The amount of ambient occlusion to be baked into the lightmaps. Ambient occlusion is the visibility function integrated over a local hemisphere of size Max Distance, so it doesn't take any lighting information into account. |
| Lock Atlas | When Lock Atlas is enabled, automatic atlasing won't be run and the Lightmap Index, Tiling and Offset on the objects won't be modified. |
| Resolution | The resolution of the lightmaps in texels per world unit, so a value of 50 and a 10x10 unit plane will result in the plane occupying 500x500 texels in the lightmap. |
| Padding | The blank space left between individual items on the atlas, given in texel units (0..1). |
Maps
An editable array of all the lightmaps.
| Compressed | Toggles compression on all lightmap assets for this scene. |
| Array Size | The size of the lightmaps array (0 to 254). |
| Lightmaps Array | The editable array of all the lightmaps in the current scene. Unassigned slots are treated as black lightmaps. Indices correspond to the Lightmap Index values on Mesh Renderers and Terrains. Unless Lock Atlas is enabled, this array will be auto-resized and populated whenever lightmaps are baked. |
Lightmap Display
Utilities for controlling how lightmaps are displayed in the editor. Lightmap Display is a sub-window of the Scene View, visible whenever the Lightmapping window is visible.
| Use Lightmaps | Whether to use the lightmaps during rendering or not. |
| Shadow Distance | The distance at which objects using Auto lights and the Close By lightmaps fade out to the Far Away lightmaps. This setting overrides, but does not modify, the QualitySettings.shadowDistance setting. |
| Show Resolution | Toggles the Lightmap Resolution scene view mode, which lets you preview how lightmap texels are spent on objects marked as static. |
Details
Dual Lightmaps
Dual lightmaps is Unity's approach to making lightmapping work with specular, normal mapping and proper blending of baked and realtime shadows. It's also a way to make your lightmaps look good even if the lightmap resolution is low.
Dual lightmaps can by default only be used in the Deferred Lighting rendering path. In the Forward rendering path you can enable dual lightmaps by writing custom shaders (using the dualforward surface shader directive).
Dual lightmaps use two sets of lightmaps:
- Far: contains full illumination.
- Near: contains indirect illumination from lights marked as Auto, full illumination from lights marked as Baked Only, emissive materials and sky lights.
Realtime Only lights are never baked. The Near lightmap set is used within distances from the camera smaller than the Shadow Distance quality setting.
Within this distance, Auto lights are rendered as realtime lights with specular and realtime shadows (which makes their shadows blend correctly with shadows from Realtime Only lights), while their indirect light is taken from the lightmap. Outside Shadow Distance, Auto lights no longer render in realtime and full illumination is taken from the Far lightmap (Realtime Only lights are still there, but with their shadows disabled).
The scene below contains one directional light with its lightmapping mode set to the default (Auto), a number of static lightmapped objects (buildings, obstacles, immovable details) and some dynamic moving or movable objects (dummies with guns, barrels). The scene is baked and rendered in dual lightmaps mode: behind the shadow distance, buildings are fully lit only by lightmaps and the two dummies are lit by the Auto light but cast no shadows; in front of the shadow distance, both dummies and the static lightmapped buildings and ground are lit in realtime and cast realtime shadows, while the soft indirect light comes from the near lightmap.


Single Lightmaps
Single lightmaps is a much simpler technique, but it can be used in any rendering path. All static lighting (i.e. from Baked Only and Auto lights, sky lights and emissive materials) gets baked into one set of lightmaps. These lightmaps are used on all lightmapped objects regardless of the shadow distance.
To match the strength of dynamic shadows to baked shadows, you need to manually adjust the Shadow Strength property of your light:

Adjusting the Shadow Strength of a light from the original value of 1.0 to 0.7.
Lightmapped Materials
Unity doesn't require you to select special materials to use with lightmaps. Shaders from the built-in shaders (and any Surface Shader you write, for that matter) already support lightmaps out of the box, without the need to do anything - it just works.
Lightmap Resolution
With the Resolution bake setting you control how many texels per unit are needed for your scene to look good. If there's a 1x1 unit plane in your scene and the resolution is set to 10 texels per unit, the plane will take up 10x10 texels in the lightmap. The Resolution bake setting is a global setting. To modify it for a specific object (make it very small or very big in the lightmap), use the Scale in Lightmap property of Mesh Renderers. Setting Scale in Lightmap to 0 results in the object not being lightmapped at all (it will still influence lightmaps on other objects). Use the Lightmap Resolution scene view render mode to preview how lightmap texels are spent.
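The texel counts quoted here follow directly from the Resolution setting being linear (texels per world unit), so each side of a surface scales independently. A minimal sketch (the function name is illustrative, not a Unity API):

```python
def plane_texels(side_in_units, texels_per_unit):
    """Texels a square plane occupies in the lightmap. The Resolution
    bake setting is linear, so the texel count per side is just
    side length times texels per unit."""
    side = side_in_units * texels_per_unit
    return (side, side)

assert plane_texels(1, 10) == (10, 10)     # 1x1 plane at resolution 10
assert plane_texels(10, 50) == (500, 500)  # 10x10 plane at resolution 50
```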

The Lightmap Resolution scene view mode, showing how lightmap texels are spent (each square is one texel).
UVs
A mesh that you're about to lightmap needs to have UVs suitable for lightmapping. The easiest way to ensure this is to enable the Generate Lightmap UVs option in the Mesh Import Settings for a given mesh.
For more information see the Lightmap UVs page.
Material Properties
The following material properties are mapped to Beast's internal scene representation:
- Color
- Main Texture
- Specular Color
- Shininess
- Transparency
- Alpha-based: when using a transparent shader, the alpha channel of the main texture controls the transparency.
- Color-based: Beast's RGB transparency can be enabled by adding a property called _TransparencyLM to the shader. Note that this transparency is defined in the opposite way compared to alpha-based transparency: here a pixel with the value (1, 0, 0) will be fully transparent to the red light component and fully opaque to the green and blue components, resulting in a red shadow. For the same reason a white texture will be fully transparent, while a black texture will be fully opaque.
- Emission
- Self-illuminated materials will emit light tinted by the Color and the main texture and masked by the illumination texture. The intensity of the emitted light is proportional to the Emission property (0 disables emission).
- Generally, large and dim light sources can be modeled as objects with emissive materials. For small and intense lights, normal light types should be used instead, since emissive materials might introduce noise in the render.
Note: When mapping materials to Beast, Unity detects the 'kind' of shader by the shader's properties and path/name keywords such as 'Specular', 'Transparent' and 'Self-Illumin'.
Skinned Mesh Renderers
Having skinned meshes that are static makes your content more flexible, since the shape of those meshes can be changed in Unity after import and can be tweaked per level. Skinned Mesh Renderers can be lightmapped in exactly the same way as Mesh Renderers and are sent to the lightmapper in their current pose.
Lightmapping can also be used if the vertices of a mesh are moved at runtime a bit -- the lighting won't be completely accurate, but in a lot of cases it will match well enough.
Advanced
Automatic Atlasing
Atlasing (UV packing) is performed automatically every time you bake, so normally you don't have to worry about it - it just works.
The world-space surface area of an object is multiplied by the per-object Scale In Lightmap value and by the global Resolution; the result determines the size of the object's UV set (more precisely: the size of the [0,1]x[0,1] UV square) in the lightmap. Next, all objects are packed into as few lightmaps as possible, while making sure each of them occupies the amount of space calculated in the previous step. If the UV set for a given object occupies only part of the [0,1]x[0,1] square, in many cases atlasing will move neighboring UV sets closer to make use of the empty space.
As a result of atlasing, each lightmapped object has its own place in one of the lightmaps, and that space doesn't overlap with any other object's space. The atlasing information is stored as three values on Mesh Renderers - Lightmap Index, Tiling (scale) and Offset - and as one value, Lightmap Index, on Terrains. It can be viewed and modified via the Object pane of the Lightmapping window.
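The packing step described above can be illustrated with a deliberately naive shelf packer. This is only a sketch of the data flow - Beast's real packer is far smarter - but it shows how per-object (Lightmap Index, Tiling, Offset) triples arise; the function name and the 1024-pixel lightmap size are assumptions for the example:

```python
def atlas(object_sizes, lightmap_size=1024):
    """Naive shelf packing: place each object's square UV block
    left-to-right in rows, opening a new lightmap when a block no
    longer fits. Returns per-object (lightmap_index, tiling, offset)
    in UV space, mirroring the three values stored on Mesh Renderers."""
    results = []
    index, x, y, row_h = 0, 0, 0, 0
    for size in object_sizes:          # block side length in pixels
        if x + size > lightmap_size:   # wrap to the next row
            x, y, row_h = 0, y + row_h, 0
        if y + size > lightmap_size:   # open a new lightmap
            index, x, y, row_h = index + 1, 0, 0, 0
        tiling = size / lightmap_size
        offset = (x / lightmap_size, y / lightmap_size)
        results.append((index, tiling, offset))
        x, row_h = x + size, max(row_h, size)
    return results

placements = atlas([512, 512, 512, 512, 512])
# Four 512-pixel blocks fill lightmap 0; the fifth spills into lightmap 1.
assert [p[0] for p in placements] == [0, 0, 0, 0, 1]
```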

Right-clicking a lightmap lets you select all game objects that use the chosen lightmap. Lightmaps of the active object from the current selection are highlighted in yellow.
Atlasing can only modify per-object data, i.e. the Lightmap Index, Tiling and Offset; it cannot modify the UV set of an object, since the UV set is stored as part of the shared mesh. Lightmap UVs for a mesh can only be created at import time, either using Unity's built-in auto-unwrapper or in an external 3D package.
Lock Atlas
When Lock Atlas is enabled, automatic atlasing won't be run and the Lightmap Index, Tiling and Offset on the objects won't be modified. Beast will rely on whatever the current atlasing is, so it's the user's responsibility to maintain correct atlasing (e.g. no overlapping objects in the lightmaps, no objects referencing a lightmap slot past the end of the lightmap array, etc.).
Lock Atlas opens up the possibility of alternative workflows when sending your objects for lightmapping. You can create your own atlasing, manually or via scripting, to fit your specific needs; you can also lock the automatically generated atlasing if you are happy with the current atlasing, have baked more sets of lightmaps for your scene and want to make sure that adding one more object to the scene won't change the atlasing and make the scene incompatible with the other lightmap sets.
Remember that Lock Atlas locks only the atlasing, not the mesh UVs. If you change your source mesh and the mesh importer is set to generate lightmap UVs, the UVs might be generated differently and your current lightmap will look incorrect on the object - to fix this you will need to re-bake the lightmap.
Custom Beast bake settings
If you need even more control over the bake process, see the custom Beast settings page.
Page last updated: 2012-11-13
Custom Beast Settings
If you need a different baking setup than the one Unity is using by default, you can specify it by using custom Beast settings.
Beast reads bake settings defined in XML format. Normally Unity generates the XML file based on the configuration you have chosen in Bake pane of the Lightmap Editor window and a number of other internal settings. You can override those settings by specifying your own settings in Beast's XML format.
To have Unity automatically generate the XML file for you, click the tab menu in the upper-right corner of the Lightmap Editor window and select Generate Beast settings file. You will notice that a BeastSettings.xml file has appeared in the project next to your lightmaps, and that the Lightmap Editor informs you that your XML settings will override Unity's settings during the next bake. Click the open button to edit your custom settings.

A sample Beast configuration file is given below:
<?xml version="1.0" encoding="ISO-8859-1"?>
<ILConfig>
<AASettings>
<samplingMode>Adaptive</samplingMode>
<clamp>false</clamp>
<contrast>0.1</contrast>
<diagnose>false</diagnose>
<minSampleRate>0</minSampleRate>
<maxSampleRate>2</maxSampleRate>
<filter>Gauss</filter>
<filterSize>
<x>2.2</x>
<y>2.2</y>
</filterSize>
</AASettings>
<RenderSettings>
<bias>0</bias>
<maxShadowRays>10000</maxShadowRays>
<maxRayDepth>6</maxRayDepth>
</RenderSettings>
<EnvironmentSettings>
<giEnvironment>SkyLight</giEnvironment>
<skyLightColor>
<r>0.86</r>
<g>0.93</g>
<b>1</b>
<a>1</a>
</skyLightColor>
<giEnvironmentIntensity>0</giEnvironmentIntensity>
</EnvironmentSettings>
<FrameSettings>
<inputGamma>1</inputGamma>
</FrameSettings>
<GISettings>
<enableGI>true</enableGI>
<fgPreview>false</fgPreview>
<fgRays>1000</fgRays>
<fgContrastThreshold>0.05</fgContrastThreshold>
<fgGradientThreshold>0</fgGradientThreshold>
<fgCheckVisibility>true</fgCheckVisibility>
<fgInterpolationPoints>15</fgInterpolationPoints>
<fgDepth>1</fgDepth>
<primaryIntegrator>FinalGather</primaryIntegrator>
<primaryIntensity>1</primaryIntensity>
<primarySaturation>1</primarySaturation>
<secondaryIntegrator>None</secondaryIntegrator>
<secondaryIntensity>1</secondaryIntensity>
<secondarySaturation>1</secondarySaturation>
<fgAOInfluence>0</fgAOInfluence>
<fgAOMaxDistance>0.223798</fgAOMaxDistance>
<fgAOContrast>1</fgAOContrast>
<fgAOScale>2.0525</fgAOScale>
</GISettings>
<SurfaceTransferSettings>
<frontRange>0.0</frontRange>
<frontBias>0.0</frontBias>
<backRange>2.0</backRange>
<backBias>-1.0</backBias>
<selectionMode>Normal</selectionMode>
</SurfaceTransferSettings>
<TextureBakeSettings>
<bgColor>
<r>1</r>
<g>1</g>
<b>1</b>
<a>1</a>
</bgColor>
<bilinearFilter>true</bilinearFilter>
<conservativeRasterization>true</conservativeRasterization>
<edgeDilation>3</edgeDilation>
</TextureBakeSettings>
</ILConfig>
The top-level XML elements are described in the sections below, along with their subelements.
Adaptive Sampling (<AASettings> element)
Beast uses an adaptive sampling scheme when sampling light maps. The light must differ more than a user set contrast threshold for Beast to place additional samples in an area. The sample area is defined by a Min and Max sample rate. The user sets the rate in the -4..4 range which means that Beast samples from 1/256 sample per pixel to 256 samples per pixel (the formula is: 4 to the power of samplerate). It is recommended to use at least one sample per pixel for production use (Min sample rate = 0). Undersampling is most useful when doing camera renders or baking textures with big UV-patches. When Beast has taken all necessary samples for an area, the final pixel value is weighed together using a filter. The look the filter produces is dependent on the filter type used and the size of the filter kernel. The available filters are:
- Box: Each sample is treated as equally important. The fastest filter to execute but it gives blurry results.
- Triangle: The filter kernel is a tent, which means that distant samples are considered less important.
- Gauss: Uses the Gauss function as filter kernel. This gives the best results (removes noise, preserves details).
There are more filters available, but these three are the most useful. The kernel (filter) size is given in pixels in the range 1..3. Beast actually uses all sub pixels when filtering, which yields better results than doing it afterwards in Photoshop.
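The sample-rate formula stated above (4 to the power of the sample rate, over the -4..4 range) works out as follows; a quick sketch with an illustrative function name:

```python
def samples_per_pixel(sample_rate):
    """Beast's adaptive sampling formula from the text above:
    4 ** sample_rate, with the rate set in the -4..4 range, giving
    1/256 up to 256 samples per pixel. Rate 0 is one sample per pixel,
    the recommended minimum for production use."""
    assert -4 <= sample_rate <= 4
    return 4 ** sample_rate

assert samples_per_pixel(0) == 1      # one sample per pixel
assert samples_per_pixel(2) == 16     # e.g. the default maxSampleRate of 2
assert samples_per_pixel(-4) == 1 / 256
```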
| AASettings | |
| samplingMode | The sampling strategy to use. Default is Adaptive. Adaptive: Adaptive anti-aliasing scheme for under/over sampling (from 1/256 up to 256 samples per pixel). SuperSampling: Anti-aliasing scheme for super sampling (from 1 up to 128 samples per pixel). |
| minSampleRate | Sets the min sample rate, default is 0 (ie one sample per pixel). |
| maxSampleRate | Sets the max sample rate, the formula used is 4^maxSampleRate (1, 4, 16, 64, 256 samples per pixel) |
| contrast | The contrast value which controls if more samples are necessary - a lower value forces more samples. |
| filter | Sets which filter type to use. Most useful ones for Baking are Box, Triangle and Gauss. |
| filterSize | Sets the filter size in pixels, from 1 to 3. |
| diagnose | Enable to diagnose the sampling. The brighter a pixel is, the more samples were taken at that position. |
Texture Bake (<TextureBakeSettings> element)
These settings help getting rid of any artifacts that are purely related to how lightmaps are rasterized and read from a texture.
| TextureBakeSettings | |
| edgeDilation | Expands the rendered region by the number of pixels specified. This is needed to prevent the artifacts that occur when the GPU filters in empty pixels from around the rendered region. Should be set to 0, though, since a better algorithm is part of the import pipeline. |
| bilinearFilter | Is used to make sure that the data in the lightmap is "correct" when the GPU applies bilinear filtering. This is most noticeable when the atlases are tightly packed. If there is only one pixel between two different UV patches, the bilinear functionality in Beast will make sure that the pixel is filled with the color from the correct patch. This minimizes light seams. |
| conservativeRasterization | Is used when the UV-chart does not cover the entire pixel. If such a layout is used, Beast may miss the texel by mistake. If conservative rasterization is used Beast will guarantee that it will find a UV-layout if present. Note that Beast will pick any UV-layout in the pixel. Conservative Rasterization often needs to be turned on if the UV atlases are tightly packed in low resolutions or if there are very thin objects present. |
| bgColor | The background color of the lightmap. Should be set to white (1,1,1,1). |
Environment (<EnvironmentSettings> element)
The environment settings in Beast control what happens if a ray misses all geometry in the scene. The environment can either be a constant color or an HDR image in lat-long format for Image Based Lighting (IBL). Note that environments should only be used for effects that can be considered to be infinitely far away, meaning that only the directional component matters.
Defining an environment is usually a very good way to get very pleasing outdoor illumination results, but might also increase bake times.
| EnvironmentSettings | |
| giEnvironment | The type of Environment: None, Skylight or IBL. |
| giEnvironmentIntensity | A scale factor for the intensity, used for avoiding gamma correction errors and to scale HDR textures to something that fits your scene. (in Unity: Sky Light Intensity) |
| skyLightColor | A constant environment color. Used if type is Skylight. It is often a good idea to keep the color below 1.0 in intensity to avoid boosting by gamma correction. Boost the intensity instead with the giEnvironmentIntensity setting. (in Unity: Sky Light Color) |
| iblImageFile | High-dynamic range IBL background image in Long-Lat format, .HDR or .EXR, absolute path. |
Render Settings/Shadows (<RenderSettings> element)
Settings for ray-traced shadows.
| RenderSettings | |
| bias | An error threshold to avoid double intersections of shadow rays. For example, a shadow ray should not intersect the same triangle as the primary ray did, but because of limited numerical precision this can happen. The bias value moves the intersection point to eliminate this problem. If set to zero this value is computed automatically depending on the scene size. |
| maxShadowRays | The maximum number of shadow rays per point that will be used to generate a soft shadow for any light source. Use this to shorten render times at the price of soft shadow quality. This will lower the maximum number of rays sent for any light sources that have a shadowSamples setting higher than this value, but will not raise the number if shadowSamples is set to a lower value. |
| maxRayDepth | The maximum amount of bounces a ray can have before being considered done. A bounce can be a reflection or a refraction. Increase the value if a ray goes through many transparent triangles before hitting an opaque object and you get light in areas that should be in the shadow. Common failure case: trees with alpha-tested leaves placed in a shadow of a mountain. |
| giTransparencyDepth | Maximum transparency depth for global illumination rays, i.e. the number of transparent surfaces the ray can go through, before you can assume it has been absorbed. Lower values speed up rendering, in scenes with, e.g. dense foliage, but may cause overlapping transparent objects to cast too much shadow. The default is 2. |
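As noted for maxShadowRays above, the setting caps a light's shadowSamples but never raises a lower value; in other words (an illustrative sketch, not Beast's actual code):

```python
def effective_shadow_rays(shadow_samples, max_shadow_rays):
    """maxShadowRays lowers the ray count for lights whose shadowSamples
    exceed it, but does not raise a shadowSamples value that is lower."""
    return min(shadow_samples, max_shadow_rays)

assert effective_shadow_rays(20000, 10000) == 10000  # capped by maxShadowRays
assert effective_shadow_rays(64, 10000) == 64        # left untouched
```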
Global Illumination (<GISettings> element)
The Global Illumination system allows you to use two separate algorithms to calculate indirect lighting. You can for instance calculate multiple levels of light bounces with a fast algorithm like the Path Tracer, and still calculate the final bounce with Final Gather to get a fast high-quality global illumination render. Both subsystems have individual control of Intensity and Saturation to boost the effects if necessary.
It's recommended to use FinalGather as the primary integrator and either None or PathTracer as the secondary integrator. Unity uses the first option (so final gather only) as the default, since it produces the best quality renders in most cases. Path Tracer should be used if many indirect bounces are needed and a Final Gather-only solution with acceptable quality would take too much time to render.
| GISettings | |
| enableGI | Setting to true enables Global Illumination. |
| primaryIntegrator | The integrator used for the final calculations of indirect light. FinalGather is default. |
| secondaryIntegrator | The integrator used for initial bounces of indirect light. Default is None, PathTracer is optional. |
| primaryIntensity | As a post process, converts the color of the primary integrator result from RGB to HSV and scales the V value. (in Unity: Bounce Intensity) |
| primarySaturation | As a post process, converts the color of the primary integrator result from RGB to HSV and scales the S value. |
| secondaryIntensity | As a post process, converts the color of the secondary integrator result from RGB to HSV and scales the V value. |
| secondarySaturation | As a post process, converts the color of the secondary integrator result from RGB to HSV and scales the S value. |
| diffuseBoost | This setting can be used to exaggerate light bouncing in dark scenes. Setting it to a value larger than 1 will push the diffuse color of materials towards 1 for GI computations. The typical use case is scenes authored with dark materials; this happens easily when doing only direct lighting, since it is easy to compensate for dark materials with strong light sources. Indirect light will be very subtle in these scenes, since the bounced light fades out quickly. Setting a diffuse boost will compensate for this. Note that values between 0 and 1 will decrease the diffuse setting in a similar way, making light bounce less than the material says; values below 0 are invalid. The actual computation taking place is a per-component pow(colorComponent, (1.0 / diffuseBoost)). (in Unity: Bounce Boost) |
| fgPreview | Enable for a quick preview of the final image lighting. |
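The per-component diffuseBoost computation quoted in the table above can be worked through directly (the helper name is illustrative):

```python
def boost_diffuse(color, diffuse_boost):
    """The per-component computation quoted above:
    pow(colorComponent, 1.0 / diffuseBoost). A boost above 1 pushes dark
    diffuse colors towards 1 for the GI computation only; a boost of 1
    leaves the color unchanged."""
    assert diffuse_boost > 0  # values below 0 are invalid per the table
    return tuple(c ** (1.0 / diffuse_boost) for c in color)

dark = (0.25, 0.25, 0.25)
assert boost_diffuse(dark, 2.0) == (0.5, 0.5, 0.5)  # 0.25 ** 0.5 == 0.5
assert boost_diffuse(dark, 1.0) == dark             # no change at boost 1
```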
Final Gather
The settings below control the quality or correctness of the Final Gather solution. The normal usage scenario is this:
- For each baking set up Contrast Threshold and Number of Rays may be adjusted. There are no perfect settings for these since they depend on the complexity of the geometry and light setup.
- Check Visibility and Light Leakage reduction are expensive operations and should only be used to remedy actual light leakage problems. These settings will only help if the light leakage is caused by the Global Illumination calculations. A very common light leakage situation occurs with a wall as a single plane with no thickness. The light leaking through in that situation does not come from GI.
- Gradient threshold should only be changed if there are white halos around corners.
Steps 2 and 3 should not need much tweaking in most scenes.
| GISettings | |
| fgContrastThreshold | Controls how sensitive the final gather should be to contrast differences between the points during precalculation. If the contrast difference between neighbouring points is above this threshold, more points will be created in that area. This tells the algorithm to place points where they are really needed, e.g. at shadow boundaries or in areas where the indirect light changes quickly. Hence this threshold adaptively controls the number of points created in the scene. Note that if a low number of final gather rays is used, the points will have high variance and hence a high contrast difference. In that case the contrast threshold needs to be raised to prevent points from clumping together, or more rays must be used per sample. (in Unity: Contrast Threshold) |
| fgRays | The maximum number of rays taken in each Final Gather sample. More rays give better results but take longer to evaluate. (in Unity: Final Gather Rays) |
| fgCheckVisibility | Turn this on to reduce light leakage through walls. When points are collected to interpolate between, some of them can be located on the other side of geometry. As a result light will bleed through the geometry. To prevent this Beast can reject points that are not visible. |
| fgCheckVisibilityDepth | Controls for how many bounces the visibility checks should be performed. Adjust this only if experiencing light leakage when using multi bounce Final Gather. |
| fgLightLeakReduction | This setting can be used to reduce light leakage through walls when using final gather as primary GI and path tracing as secondary GI. Leakage, which can happen when e.g. the path tracer filters in values on the other side of a wall, is reduced by using final gather as a secondary GI fallback when sampling close to walls or corners. When this is enabled a final gather depth of 3 will be used automatically, but the higher depths will only be used close to walls or corners. Note that this is only usable when path tracing is used as secondary GI. |
| fgLightLeakRadius | Controls how far away from walls the final gather will be called again, instead of the secondary GI. If 0.0 is used Beast will try to estimate a good value. If this does not eliminate the leakage it can be set to a higher value manually. |
| fgGradientThreshold | Controls how the irradiance gradient is used in the interpolation. Each point stores its irradiance gradient which can be used to improve the interpolation. In some situations using the gradient can result in white "halos" and other artifacts. This threshold can be used to reduce those artifacts (set it low or to 0). (in Unity: Interpolation) |
| fgInterpolationPoints | Sets the number of final gather points to interpolate between. A higher value will give a smoother result, but can also smooth out details. If light leakage is introduced through walls when this value is increased, checking the sample visibility solves that problem. (in Unity: Interpolation Points) |
| fgNormalThreshold | Controls how sensitive the final gather should be for differences in the points normals. A lower value will give more points in areas of high curvature. |
| fgDepth | Controls the number of indirect light bounces. A higher value gives a more correct result, but the cost is increased rendering time. For cheaper multi bounce GI, use Path Tracer as the secondary integrator instead of increasing depth. (in Unity: Bounces) |
| fgAttenuationStart | The distance where attenuation is started. There is no attenuation before this distance. This can be used to add a falloff effect to the final gather lighting. When fgAttenuationStop is set higher than 0.0 this is enabled. |
| fgAttenuationStop | Sets the distance where attenuation is stopped (fades to zero). There is zero intensity beyond this distance. To enable attenuation set this value higher than 0.0. The default value is 0.0. |
| fgFalloffExponent | This can be used to adjust the rate by which lighting falls off by distance. A higher exponent gives a faster falloff. |
| fgAOInfluence | Blend the Final Gather with Ambient Occlusion. Range between 0..1. 0 means no occlusion, 1 is full occlusion. If Final Gather is used with multiple depths or with Path Tracing as Secondary GI the result can become a bit "flat". A great way to get more contrast into the lighting is to factor in a bit of ambient occlusion into the calculation. This Ambient Occlusion algorithm affects only final gather calculations. The Ambient Occlusion exposed in the Lightmapping window is calculated differently - by a separate, geometry-only pass. |
| fgAOMaxDistance | Max distance for the occlusion rays. Beyond this distance a ray is considered to be unoccluded. Can be used to avoid full occlusion for closed scenes such as rooms or to limit the AO contribution to creases. |
| fgAOContrast | Can be used to adjust the contrast for ambient occlusion. |
| fgAOScale | A scaling of the occlusion values. Can be used to increase or decrease the shadowing effect. |
Path Tracer
Use path tracing to get fast multi bounce global illumination. It should not be used as primary integrator for baking since the results are quite noisy which does not look good in light maps. It can be used as primary integrator to adjust the settings, to make sure the cache spacing and accuracy is good. The intended usage is to have it set as secondary integrator and have single bounce final gather as primary integrator. Accuracy and Point Size can be adjusted to make sure that the cache is sufficiently fine grained.
| GISettings | |
| ptAccuracy | Sets the number of paths that are traced for each sample element (pixel, texel or vertex). For preview renderings, a low value like 0.5 to 0.1 can be used. This means that 1/2 to 1/10 of the pixels will generate a path. For production renderings values above 1.0 may be used, if necessary to get good quality. |
| ptPointSize | Sets the maximum distance between the points in the path tracer cache. If set to 0 a value will be calculated automatically based on the size of the scene. The automatic value will be printed out during rendering, which is a good starting value if the point size needs to be adjusted. |
| ptCacheDirectLight | When this is enabled the path tracer will also cache direct lighting from light sources. This increases performance since fewer direct light calculations are needed. It gives an approximate result, and hence can affect the quality of the lighting. For instance indirect light bounces from specular highlights might be lost. |
| ptCheckVisibility | Turn this on to reduce light leakage through walls. When points are collected to interpolate between, some of them can be located on the other side of geometry. As a result light will bleed through the geometry. To prevent this Beast can reject points that are not visible. Note: If using this turn off light leakage reduction for Final Gather. |
Frame Settings (<FrameSettings> element)
Allows you to control the number of threads Beast uses and also the gamma correction of the input and output.
| FrameSettings | |
| inputGamma | Keep at 1, as this setting is set appropriately per texture. |
Surface Transfer (<SurfaceTransferSettings> element)
SurfaceTransferSettings are used to allow transferring the lighting from LOD0 (the level of detail that is shown when the camera is close to an object) to LODs with lower fidelity. Keep the settings at their defaults.
Page last updated: 2012-11-01
Lightmap UVs
Unity will use UV2 for lightmaps, if the channel is present. Otherwise it will use primary UVs.
Unity can unwrap your mesh for you to generate lightmap UVs. Just use the Generate Lightmap UVs setting in Mesh Import Settings.
Advanced Options for Generate Lightmap UVs:
| Pack Margin | The margin between neighboring patches, measured in pixels, assuming the mesh takes up the entire 1024x1024 lightmap. This has a significant effect: to allow filtering, the lightmap contains lighting information in texels near each patch border, so there needs to be some margin between patches to avoid light bleeding when the lightmap is applied. |
| Hard Angle | The angle between neighboring triangles above which the edge between them is considered a hard edge and a seam is created. If you set it to 180 degrees, all edges will be considered smooth: this is useful for organic models. The default value of 88 degrees is useful for mechanical models. |
| Angle Error | The maximum possible deviation of UV angles from the source geometry angles, as a percentage. Basically it controls how similar the triangles in UV space will be to the triangles in the original geometry (the lower the value, the more similar the triangles will be). Usually you want it pretty low to avoid artifacts when applying the lightmap. Default is 8 percent. (This value goes from 0 to 100) |
| Area Error | The maximum possible deviation of UV areas from the source geometry areas, as a percentage. Basically it controls how well relative triangle areas are preserved. Usually this is not very critical, and raising it can allow the unwrapper to create fewer patches; however, you should check that the distortion does not degrade lightmap quality, since triangles may then end up with different resolutions. Default is 15 percent. (This value goes from 0 to 100) |
If you prefer to provide your own UVs for lightmapping, remember that a good UV set for lightmapping:
- Is contained within the [0,1]x[0,1] space
- Has no overlapping faces.
- Has low angle distortion, that is deviation of angles in UVs and in source geometry.
- Has low area distortion, that is, relative scale of triangles is mostly preserved, unless you really want some areas to have bigger Lightmap Resolution.
- Has enough margin between individual patches.
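The first and last hints above can be checked mechanically. A minimal sketch (an illustrative helper, not a Unity API) that verifies every UV lies inside the [0,1]x[0,1] space, optionally leaving a margin to the border:

```python
def uv_in_unit_square(uvs, margin=0.0):
    """Checks that every (u, v) pair lies inside [0,1]x[0,1],
    optionally keeping `margin` of space to the border so that
    patches near the edge don't bleed when filtered."""
    return all(margin <= u <= 1 - margin and margin <= v <= 1 - margin
               for u, v in uvs)

assert uv_in_unit_square([(0.1, 0.1), (0.9, 0.5)])
assert not uv_in_unit_square([(1.2, 0.5)])              # outside the square
assert not uv_in_unit_square([(0.005, 0.5)], margin=0.01)  # too close to edge
```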
Some examples of the hints suggested above:
Angle distortion
These screenshots were made at equal resolution, but with different UVs. Notice the artifacts, and how the shape of the light was slightly changed. There are only 4 triangles here, so in practice shape distortion can be far uglier.

Area distortion
Here there are two spotlights with the same parameters; the only difference is that they point at areas with different lightmap resolution, because the relative triangle scale has not been preserved.

LightProbes
Although lightmapping adds greatly to the realism of a scene, it has the disadvantage that non-static objects in the scene are less realistically rendered and can look disconnected as a result. It isn't possible to calculate lightmapping for moving objects in real time but it is possible to get a similar effect using light probes. The idea is that the lighting is sampled at strategic points in the scene, denoted by the positions of the probes. The lighting at any position can then be approximated by interpolating between the samples taken by the nearest probes. The interpolation is fast enough to be used during gameplay and helps avoid the disconnection between the lighting of moving objects and static lightmapped objects in the scene.
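The interpolation idea described above can be illustrated with a simplified sketch. Note this is only an illustration: Unity's actual scheme interpolates within a spatial subdivision of the probe volume (not specified here), and each real probe sample is a full spherical lighting record rather than the single scalar used below:

```python
def interpolate_probes(position, probes):
    """Illustrative only: blend the lighting samples of nearby probes
    with inverse-squared-distance weights. `probes` is a list of
    ((x, y, z), sample) pairs; `sample` is a scalar standing in for a
    full spherical lighting sample."""
    weights, total = [], 0.0
    for probe_pos, sample in probes:
        d2 = sum((a - b) ** 2 for a, b in zip(position, probe_pos))
        if d2 == 0.0:
            return sample          # exactly on a probe: use it directly
        w = 1.0 / d2
        weights.append((w, sample))
        total += w
    return sum(w * s for w, s in weights) / total

# Halfway between a bright probe and a dark probe -> the average:
lit = interpolate_probes((0.5, 0, 0), [((0, 0, 0), 1.0), ((1, 0, 0), 0.0)])
assert abs(lit - 0.5) < 1e-9
```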
Adding Light probes
The Light Probe Group component (menu: ) can be added to any available object in the scene. The inspector can be used to add new probes to the group. The probes appear in the scene as yellow spheres which can be positioned in the same manner as GameObjects. Selected probes can also be duplicated with the usual keyboard shortcut (ctrl+d/cmd+d).

Choosing Light Probe positions
Remember to place probes where you want to sample light or sample darkness. The probes need to form a volume within the scene for the space subdivision to work properly.
The simplest approach to positioning is to arrange them in a regular 3D grid pattern. While this setup is simple and effective, it is likely to consume a lot of memory (each light probe is essentially a spherical, panoramic HDR image of the view from the sample point). It is worth noting that probes are only needed for regions that players, NPCs or other dynamic objects can actually move to. Also, since lighting conditions are interpolated for positions between probes, it is not necessary to use lots of them across areas where the light doesn't change very much. For example, a large area of uniform shadow would not need a large number of probes and neither would a brightly lit area far away from reflective objects. Probes are generally needed where the lighting conditions change abruptly, for instance at the edge of a shadow area or in places where pieces of scenery have different colors.
In some cases, the infrastructure of the game can be useful in choosing light probe positions. For example, a racing game typically uses waypoints around the track for AI and other purposes. These are likely to be good candidates for probe positions and it would likely be straightforward to set these positions from an editor script. Similarly, navigation meshes typically define the areas that can be reached by players and these also lend themselves to automated positioning of probes.
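The waypoint idea above can be sketched with plain math, outside of Unity. The waypoint list, the 2-unit raise and the function name below are illustrative assumptions; in Unity you would compute these positions in an editor script and assign them to a Light Probe Group. Raising a copy of each probe also addresses the "form a volume" requirement discussed later:

```javascript
// Sketch: derive light probe positions from AI waypoints.
// The waypoint data and the 2-unit vertical offset are illustrative
// assumptions; in Unity an editor script would assign the result to a
// Light Probe Group.
function buildProbePositions(waypoints, raise) {
  var positions = [];
  waypoints.forEach(function (wp) {
    positions.push({ x: wp.x, y: wp.y, z: wp.z });          // probe at ground level
    positions.push({ x: wp.x, y: wp.y + raise, z: wp.z });  // raised copy, so a volume is formed
  });
  return positions;
}

var waypoints = [
  { x: 0, y: 0, z: 0 },
  { x: 10, y: 0, z: 5 },
  { x: 20, y: 1, z: 5 }
];
var probes = buildProbePositions(waypoints, 2);
console.log(probes.length); // two probes per waypoint
```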
Here light probes have been baked over surfaces where our characters can walk on, but only where there are interesting lighting changes to capture:

Flat 2D levels
As it is now, the light probe system can't bake a completely flat probe cloud. So even if all your characters move only on a plane, you still have to take care to position at least some probes in a higher layer, so that a volume is formed and interpolation can work properly.

Good: This is the original probe placement. The characters can move up the ramps and up onto the boxes, so it's good to sample lighting up there as well.

Good: Here we assume the characters can only move on the plane. Still, there are a couple of probes placed a little higher, so that a volume is formed and thin cells are avoided.

Bad: The probes are placed too flat, which creates really long and thin cells and produces unintuitive interpolation results.
Using Light Probes
To allow a mesh to receive lighting from the probe system, you should enable the Use Light Probes option on its Mesh Renderer:


The probe interpolation requires a point in space to represent the position of the mesh that is receiving light. By default, the centre of the mesh's bounding box is used but it is possible to override this by dragging a Transform to the Mesh Renderer's Light Probe Anchor property (this Transform's position will be used as the interpolation point instead). This may be useful when an object contains two separate adjoining meshes; if both meshes are lit individually according to their bounding box positions then the lighting will be discontinuous at the place where they join. This can be prevented by using the same Transform (for example the parent or a child object) as the interpolation point for both Mesh Renderers.
When an object using light probes is the active selected object in the Light Probes Scene View mode, its interpolated probe will be rendered on top of it for preview. The interpolated probe is the one used for rendering the object and is connected with 4 thin blue lines (3 when outside of the probe volume) to the probes it is being interpolated between:

Dual Lightmaps vs. Single Lightmaps mode
In Single Lightmaps mode all static lighting (including lights set to 'Auto' lightmapping mode) is baked into the light probes.
In Dual Lightmaps mode light probes will store lighting in the same configuration as 'Near' lightmaps, i.e. full illumination from sky lights, emissive materials, area lights and 'Baked Only' lights, but only indirect illumination from 'Auto' lights. Thanks to that the object can be lit in real-time with the 'Auto' lights and take advantage of dynamic elements such as real-time shadows, but at the same time receive indirect lighting added to the scene by these lights.
Page last updated: 2012-10-16
Occlusion Culling
Occlusion Culling is a feature that disables rendering of objects when they are not currently seen by the camera because they are obscured by other objects. This does not happen automatically in 3D computer graphics since most of the time objects farthest away from the camera are drawn first and closer objects are drawn over the top of them (this is called "overdraw"). Occlusion Culling is different from Frustum Culling. Frustum Culling only disables the renderers for objects that are outside the camera's viewing area but does not disable anything hidden from view by overdraw. Note that when you use Occlusion Culling you will still benefit from Frustum Culling.

The scene rendered without Occlusion Culling

The same scene rendered with Occlusion Culling
The occlusion culling process will go through the scene using a virtual camera to build a hierarchy of potentially visible sets of objects. This data is used at runtime by each camera to identify what is visible and what is not. Equipped with this information, Unity will ensure only visible objects get sent to be rendered. This reduces the number of draw calls and increases the performance of the game.
The data for occlusion culling is composed of cells. Each cell is a subdivision of the entire bounding volume of the scene. More specifically the cells form a binary tree. Occlusion Culling uses two trees, one for View Cells (Static Objects) and the other for Target Cells (Moving Objects). View Cells map to a list of indices that define the visible static objects which gives more accurate culling results for static objects.
It is important to keep this in mind when creating your objects because you need a good balance between the size of your objects and the size of the cells. Ideally, you shouldn't have cells that are too small in comparison with your objects but equally you shouldn't have objects that cover many cells. You can sometimes improve the culling by breaking large objects into smaller pieces. However, you can still merge small objects together to reduce draw calls and, as long as they all belong to the same cell, occlusion culling will not be affected. The collection of cells and the visibility information that determines which cells are visible from any other cell is known as a PVS (Potentially Visible Set).
Setting up Occlusion Culling
In order to use Occlusion Culling, there is some manual setup involved. First, your level geometry must be broken into sensibly sized pieces. It is also helpful to lay out your levels into small, well defined areas that are occluded from each other by large objects such as walls, buildings, etc. The idea here is that each individual mesh will be turned on or off based on the occlusion data. So if you have one object that contains all the furniture in your room then either all or none of the entire set of furniture will be culled. This doesn't make nearly as much sense as making each piece of furniture its own mesh, so each can individually be culled based on the camera's view point.
You need to tag all scene objects that you want to be part of the occlusion calculations in the Inspector. The fastest way to do this is to multi-select the objects you want to be included in occlusion calculations, and mark them as Occluder Static and Occludee Static.

Marking an object for Occlusion
When should I use Occludee Static? Transparent objects that do not occlude, as well as small objects that are unlikely to occlude other things, should be marked as Occludee Static, but not Occluder Static. This means they will be considered in occlusion by other objects, but will not be considered as occluders themselves, which will help reduce computation.
Occlusion Culling Window
For most operations dealing with Occlusion Culling, we recommend you use the Occlusion Culling Window (Window > Occlusion Culling).
In the Occlusion Culling Window, you can work with occluder meshes, and Occlusion Areas.
If you are in the Object tab of the Occlusion Culling Window and have a Mesh Renderer selected in the scene, you can modify the relevant Static flags:

Occlusion Culling Window for a Mesh Renderer
If you are in the Object tab of the Occlusion Culling Window and have an Occlusion Area selected, you can work with the relevant OcclusionArea properties (for more details go to the Occlusion Area section)

Occlusion Culling Window for the Occlusion Area
NOTE: By default if you don't create any occlusion areas, occlusion culling will be applied to the whole scene.
NOTE: Whenever your camera is outside occlusion areas, occlusion culling will not be applied. It is important to set up your Occlusion Areas to cover the places where the camera can potentially be, but making the areas too large incurs a cost during baking.
Occlusion Culling - Bake

Occlusion culling inspector bake tab.
Properties
| Technique | Select between the types of occlusion culling baking |
| PVS only | Only static objects will be occlusion culled. Dynamic objects will be culled based on the view frustum only. This technique has the smallest overhead on the CPU, but since dynamic objects are not culled, it is only recommended for games with few moving objects and characters. Since all visibility is precomputed, you cannot open or close portals at runtime. |
| PVS and dynamic objects | Static objects are culled using precomputed visibility. Dynamic objects are culled using portal culling. This technique is a good balance between runtime overhead and culling efficiency. Since all visibility is precomputed, you cannot open or close a portal at runtime. |
| Automatic Portal Generation | Portals are generated automatically. Static and dynamic objects are culled through portals. This allows you to open and close portals at runtime. This technique will cull objects most accurately, but also has the most performance overhead on the CPU. |
| View Cell Size | Size of each view area cell. A smaller value produces more accurate occlusion culling. The value is a tradeoff between occlusion accuracy and storage size |
| Near Clip Plane | Should be set to the smallest near clip plane used by any camera in the game. |
| Far Clip Plane | Far Clip Plane used to cull the objects. Any object whose distance is greater than this value will be occluded automatically. (Should be set to the largest far clip plane used by any camera in the game.) |
| Memory limit | This is a hint for the PVS-based baking, not available in Automatic Portal Generation mode |
When you have finished tweaking these values, you can click the Bake button to start processing the Occlusion Culling data. If you are not satisfied with the results, you can click the Clear button to remove previously calculated data.
Occlusion Culling - Visualization

Occlusion culling inspector visualization tab.
The near and far planes define a virtual camera that is used to calculate the occlusion data. If you have several cameras with different near or far planes, you should use the smallest near plane and the largest far plane distance of all cameras for correct inclusion of objects.
All the objects in the scene affect the size of the bounding volume so try to keep them all within the visible bounds of the scene.
When you're ready to generate the occlusion data, click the Bake button. Remember to choose the bake settings in the Bake tab: lower values make the generation quicker but less precise, while higher values should be used for production quality closer to release.
Bear in mind that the time taken to build the occlusion data will depend on the cell levels, the data size and the quality you have chosen. Unity will show the status of the PVS generation at the bottom of the main window.
After the processing is done, you should see some colorful cubes in the View Area. The colored areas are regions that share the same occlusion data.
Click the Clear button if you want to remove all the pre-calculated data for Occlusion Culling.
Occlusion Area (Unity Pro only)
To apply occlusion culling to moving objects, you have to create an Occlusion Area and then modify its size to fit the space where the moving objects will be located (of course, the moving objects cannot be marked as Static). To create an Occlusion Area, add the Occlusion Area component to an empty GameObject (menu: Component > Rendering > Occlusion Area).
After creating the Occlusion Area, check the Is Target Volume checkbox to occlude moving objects.

Occlusion Area properties for moving objects
| Size | Defines the size of the Occlusion Area. |
| Center | Sets the center of the Occlusion Area. By default this is (0,0,0) and is located in the center of the box. |
| Is View Volume | Defines where the camera can be. Check this when you want to occlude Static objects that are inside this Occlusion Area. |
| Is Target Volume | Check this when you want to occlude moving objects. |
| Target Resolution | Determines how accurate the occlusion culling inside the area will be. This affects the size of the cells in the Occlusion Area. Note: this only affects Target Areas. |
| Low | Takes less time to calculate but is less accurate. |
| Medium | Balances occlusion culling accuracy against processing time. |
| High | Takes longer to calculate but gives better accuracy. |
| Very High | Use this if the High setting is not accurate enough, but be aware that it takes even longer to process. |
| Extremely High | Use this only when nearly perfectly accurate occlusion culling of moving objects is required. Note: it consumes a very large amount of processing time. |
After you have created the Occlusion Area, you should check how the box is divided into cells. To see how the Occlusion Area is calculated, select Edit and toggle the View button in the Occlusion Culling Preview Panel.

Testing the generated occlusion
After setting up occlusion, you can test it by enabling Occlusion Culling in the Occlusion Culling Preview Panel and moving the Main Camera around in the Scene View.

The Occlusion Culling mode in the Scene View
As you move the Main Camera around (whether or not you are in Play mode), you will see objects being disabled as they become occluded. You need to make sure that the occlusion data does not produce errors; you can recognize an error when objects pop into view as you move around. If this happens, your options for fixing it are to change the resolution (if you are working with Target Volumes) or to move objects around to cover up the error. To debug occlusion problems, you can move the Main Camera to the problematic position and spot-check the issue.
When the processing is done, you should see some colorful cubes in the View Area. Blue cubes represent the cell divisions for Target Volumes. White cubes represent the cell divisions for View Volumes. If the parameters were set correctly, you should see some objects not being rendered, either because they are outside of the camera's view frustum or because they are occluded by other objects.
If, after the occlusion bake is completed, nothing in your scene is being occluded, try breaking your objects into smaller pieces so that they can be contained inside the cells.
Occlusion Portals
In order to create occlusion primitives that can be opened and closed at runtime, Unity uses Occlusion Portals.

| Open | Indicates if the portal is open (can be set from a script). |
| Center | Sets the center of the Occlusion Area. By default this is (0,0,0) and is located in the center of the box. |
| Size | Defines the size of the Occlusion Area. |
CameraTricks
It is useful to understand how the camera works when designing certain visual effects or interactions with objects in the scene. This section explains the nature of the camera's view and how it can be used to enhance gameplay.
- UnderstandingFrustum
- The Size of the Frustum at a Given Distance from the Camera
- Dolly Zoom (AKA the "Trombone" Effect)
- Rays from the Camera
- Using an Oblique Frustum
- Creating an Impression of Large or Small Size
UnderstandingFrustum
Understanding the View Frustum
The word frustum refers to a solid shape that looks like a pyramid with the top cut off parallel to the base. This is the shape of the region that can be seen and rendered by a perspective camera. The following thought experiment should help to explain why this is the case.
Imagine holding a straight rod (a broom handle or a pencil, say) end-on to a camera and then taking a picture. If the rod were held in the centre of the picture, perpendicular to the camera lens, then only its end would be visible as a circle on the picture; all other parts of it would be obscured. If you moved it upward, the lower side would start to become visible but you could hide it again by angling the rod upward. If you continued moving the rod up and angling it further upward, the circular end would eventually reach the top edge of the picture. At this point, any object above the line traced by the rod in world space would not be visible on the picture.

The rod could just as easily be moved and rotated left, right, or down or any combination of horizontal and vertical. The angle of the "hidden" rod simply depends on its distance from the centre of the screen in both axes.
The meaning of this thought experiment is that any point in a camera's image actually corresponds to a line in world space and only a single point along that line is visible in the image. Everything behind that position on the line is obscured.
The outer edges of the image are defined by the diverging lines that correspond to the corners of the image. If those lines were traced backwards towards the camera, they would all eventually converge at a single point. In Unity, this point is located exactly at the camera's transform position and is known as the centre of perspective. The angle subtended by the lines converging from the top and bottom centres of the screen at the centre of perspective is called the field of view (often abbreviated to FOV).
As stated above, anything that falls outside the diverging lines at the edges of the image will not be visible to the camera, but there are also two other restrictions on what it will render. The near and far clipping planes are parallel to the camera's XY plane and each set at a certain distance along its centre line. Anything closer to the camera than the near clipping plane and anything farther away than the far clipping plane will not be rendered.

The diverging corner lines of the image along with the two clipping planes define a truncated pyramid - the view frustum.
Page last updated: 2011-09-06
FrustumSizeAtDistance
A cross-section of the view frustum at a certain distance from the camera defines a rectangle in world space that frames the visible area. It is sometimes useful to calculate the size of this rectangle at a given distance, or find the distance where the rectangle is a given size. For example, if a moving camera needs to keep an object (such as the player) completely in shot at all times then it must not get so close that part of that object is cut off.
The height of the frustum at a given distance (both in world units) can be obtained with the following formula:-
var frustumHeight = 2.0 * distance * Mathf.Tan(camera.fieldOfView * 0.5 * Mathf.Deg2Rad);
...and the process can be reversed to calculate the distance required to give a specified frustum height:-
var distance = frustumHeight * 0.5 / Mathf.Tan(camera.fieldOfView * 0.5 * Mathf.Deg2Rad);
It is also possible to calculate the FOV angle when the height and distance are known:-
camera.fieldOfView = 2 * Mathf.Atan(frustumHeight * 0.5 / distance) * Mathf.Rad2Deg;
Each of these calculations involves the height of the frustum but this can be obtained from the width (and vice versa) very easily:-
var frustumWidth = frustumHeight * camera.aspect;
var frustumHeight = frustumWidth / camera.aspect;
Page last updated: 2011-09-06
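The formulas above are mutually consistent, which can be checked with plain math outside of Unity. In this sketch `Math.tan`/`Math.atan` stand in for `Mathf`, and the 60-degree FOV, 10-unit distance and 16:9 aspect ratio are just example values:

```javascript
// Round-trip check of the frustum formulas using plain JavaScript math.
// Math.tan/Math.atan stand in for Unity's Mathf; the FOV, distance and
// aspect ratio are example values.
var DEG2RAD = Math.PI / 180;
var RAD2DEG = 180 / Math.PI;

var fov = 60;       // vertical field of view in degrees
var distance = 10;  // distance from the camera in world units
var aspect = 16 / 9;

var frustumHeight = 2.0 * distance * Math.tan(fov * 0.5 * DEG2RAD);
var backDistance = frustumHeight * 0.5 / Math.tan(fov * 0.5 * DEG2RAD);
var backFov = 2 * Math.atan(frustumHeight * 0.5 / distance) * RAD2DEG;
var frustumWidth = frustumHeight * aspect;

console.log(frustumHeight.toFixed(3)); // ~11.547 for a 60 degree FOV at 10 units
console.log(backDistance.toFixed(3));  // recovers the original distance
console.log(backFov.toFixed(3));       // recovers the original FOV
```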
DollyZoom
Dolly Zoom is the well-known visual effect where the camera simultaneously moves towards a target object and zooms out from it. The result is that the object appears roughly the same size but all the other objects in the scene change perspective. Done subtly, dolly zoom has the effect of highlighting the target object, since it is the only thing in the scene that isn't shifting position in the image. Alternatively, the zoom can be deliberately performed quickly to create the impression of disorientation.
An object that just fits within the frustum vertically will occupy the whole height of the view as seen on the screen. This is true whatever the object's distance from the camera and whatever the field of view. For example, you can move the camera closer to the object but then widen the field of view so that the object still just fits inside the frustum's height. That particular object will appear the same size onscreen but everything else will change size as the distance and FOV change. This is the essence of the dolly zoom effect.

Creating the effect in code is a matter of saving the height of the frustum at the object's position at the start of the zoom. Then, as the camera moves, its new distance is found and the FOV adjusted to keep it the same height at the object's position. This can be accomplished with the following code:-
var target: Transform;
private var initHeightAtDist: float;
private var dzEnabled: boolean;
// Calculate the frustum height at a given distance from the camera.
function FrustumHeightAtDistance(distance: float) {
return 2.0 * distance * Mathf.Tan(camera.fieldOfView * 0.5 * Mathf.Deg2Rad);
}
// Calculate the FOV needed to get a given frustum height at a given distance.
function FOVForHeightAndDistance(height: float, distance: float) {
return 2 * Mathf.Atan(height * 0.5 / distance) * Mathf.Rad2Deg;
}
// Start the dolly zoom effect.
function StartDZ() {
var distance = Vector3.Distance(transform.position, target.position);
initHeightAtDist = FrustumHeightAtDistance(distance);
dzEnabled = true;
}
// Turn dolly zoom off.
function StopDZ() {
dzEnabled = false;
}
function Start() {
StartDZ();
}
function Update () {
if (dzEnabled) {
// Measure the new distance and readjust the FOV accordingly.
var currDistance = Vector3.Distance(transform.position, target.position);
camera.fieldOfView = FOVForHeightAndDistance(initHeightAtDist, currDistance);
}
// Simple control to allow the camera to be moved in and out using the up/down arrows.
transform.Translate(Input.GetAxis("Vertical") * Vector3.forward * Time.deltaTime * 5);
}
Page last updated: 2011-09-06
CameraRays
In the section Understanding the View Frustum, it was explained that any point in the camera's view corresponds to a line in world space. It is sometimes useful to have a mathematical representation of that line and Unity can provide this in the form of a Ray object. The Ray always corresponds to a point in the view, so the Camera class provides the ScreenPointToRay and ViewportPointToRay functions. The difference between the two is that ScreenPointToRay expects the point to be provided as a pixel coordinate, while ViewportPointToRay takes normalized coordinates in the range 0..1 (where 0 represents the bottom or left and 1 represents the top or right of the view). Each of these functions returns a Ray which consists of a point of origin and a vector which shows the direction of the line from that origin. The Ray originates from the near clipping plane rather than the Camera's transform.position point.
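The relationship between the two coordinate systems is a simple scaling by the screen dimensions; the 1024x768 screen size below is just an example (in Unity the values would come from Screen.width and Screen.height):

```javascript
// Convert a pixel coordinate (as used by ScreenPointToRay) into the
// normalized 0..1 coordinate expected by ViewportPointToRay.
// The screen size is an example value.
function screenToViewport(px, py, screenWidth, screenHeight) {
  return { x: px / screenWidth, y: py / screenHeight };
}

var vp = screenToViewport(512, 384, 1024, 768);
console.log(vp.x, vp.y); // 0.5 0.5 -> the centre of the view
```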
Raycasting
The most common use of a Ray from the camera is to perform a raycast out into the scene. A raycast sends an imaginary "laser beam" along the ray from its origin until it hits a collider in the scene. Information is then returned about the object and the point that was hit in a RaycastHit object. This is a very useful way to locate an object based on its onscreen image. For example, the object at the mouse position can be determined with the following code:-
var hit: RaycastHit;
var ray: Ray = camera.ScreenPointToRay(Input.mousePosition);
if (Physics.Raycast(ray, hit)) {
var objectHit: Transform = hit.transform;
// Do something with the object that was hit by the raycast.
}
Moving the Camera Along a Ray
It is sometimes useful to get a ray corresponding to a screen position and then move the camera along that ray. For example, you may want to allow the user to select an object with the mouse and then zoom in on it while keeping it "pinned" to the same screen position under the mouse (this might be useful when the camera is looking at a tactical map, say). The code to do this is fairly straightforward:-
var zooming: boolean;
var zoomSpeed: float;
if (zooming) {
var ray: Ray = camera.ScreenPointToRay(Input.mousePosition);
var zoomDistance = zoomSpeed * Input.GetAxis("Vertical") * Time.deltaTime;
camera.transform.Translate(ray.direction * zoomDistance, Space.World);
}
Page last updated: 2011-09-06
ObliqueFrustum
By default, the view frustum is arranged symmetrically around the camera's centre line but it doesn't necessarily need to be. The frustum can be made "oblique", which means that one side is at a smaller angle to the centre line than the opposite side. The effect is rather like taking a printed photograph and cutting one edge off. This makes the perspective on one side of the image seem more condensed giving the impression that the viewer is very close to the object visible at that edge. An example of how this can be used is a car racing game where the frustum might be flattened at its bottom edge. This would make the viewer seem closer to the road, accentuating the feeling of speed.

While the camera class doesn't have functions to set the obliqueness of the frustum, it can be done quite easily by altering the projection matrix:-
function SetObliqueness(horizObl: float, vertObl: float) {
var mat: Matrix4x4 = camera.projectionMatrix;
mat[0, 2] = horizObl;
mat[1, 2] = vertObl;
camera.projectionMatrix = mat;
}
Mercifully, it is not necessary to understand how the projection matrix works to make use of this. The horizObl and vertObl values set the amount of horizontal and vertical obliqueness, respectively. A value of zero indicates no obliqueness. A positive value shifts the frustum rightwards or upwards, thereby flattening the left or bottom side. A negative value shifts leftwards or downwards and consequently flattens the right or top side of the frustum. The effect can be seen directly if this script is added to a camera and the game is switched to the scene view while the game runs; the wireframe depiction of the camera's frustum will change as you vary the values of horizObl and vertObl in the inspector. A value of 1 or -1 in either variable indicates that one side of the frustum is completely flat against the centreline. It is possible although seldom necessary to use values outside this range.
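The behaviour of those matrix entries can be sketched with a little plain math (this is not Unity API; standard perspective-projection conventions are assumed, with the camera looking down the negative z axis). After the perspective divide, a point straight ahead of the camera lands at normalized x = -horizObl, so a value of 1 pushes straight-ahead to the left screen edge, i.e. the frustum swings rightwards:

```javascript
// Sketch of how the m[0][2] entry shifts the projected image.
// Standard perspective conventions are assumed (camera looks down -z);
// this is plain math, not Unity API.
function projectX(x, z, m00, horizObl) {
  var clipX = m00 * x + horizObl * z; // the third column multiplies view-space z
  var clipW = -z;                     // the perspective divide uses -z
  return clipX / clipW;               // normalized device x in [-1, 1]
}

var m00 = 1.0 / Math.tan(30 * Math.PI / 180); // symmetric part for a 60 degree FOV
console.log(projectX(0, -10, m00, 0));  // 0  -> straight ahead is centred
console.log(projectX(0, -10, m00, 1));  // -1 -> pushed to the left screen edge
console.log(projectX(0, -10, m00, -1)); // 1  -> pushed to the right screen edge
```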
Page last updated: 2011-09-06
ImpressionOfSize
From the graphical point of view, the units of distance in Unity are arbitrary and don't correspond to real world measurements. Although this gives flexibility and convenience for design, it is not always easy to convey the intended size of the object. For example, a toy car looks different to a full size car even though it may be an accurate scale model of the real thing.
A major element in the impression of an object's size is the way the perspective changes over the object's length. For example, if a toy car is viewed from behind then the front of the car will only be a short distance farther away than the back. Since the distance is small, perspective will have relatively little effect and so the front will appear little different in size to the back. With a full size car, however, the front will be several metres farther away from the camera than the back and the effect of perspective will be much more noticeable.
For an object to appear small, the lines of perspective should diverge only very slightly over its depth. You can achieve this by using a narrower field of view than the default 60 degrees and moving the camera farther away to compensate for the increased onscreen size. Conversely, if you want to make an object look big, use a wide FOV and move the camera in close. When these perspective alterations are combined with other obvious techniques (like looking down at a "small" object from a higher-than-normal vantage point) the result can be quite convincing.
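The trade-off follows directly from the frustum-height formula given earlier: for an object of a given height to just fill the view vertically, the camera distance is height * 0.5 / tan(fov * 0.5). The 2-unit object height and the FOV values here are example numbers:

```javascript
// Distance at which an object of a given height just fills the view
// vertically, from the frustum-height formula. The object height and
// FOV values are example numbers.
var DEG2RAD = Math.PI / 180;

function distanceToFillView(objectHeight, fovDegrees) {
  return objectHeight * 0.5 / Math.tan(fovDegrees * 0.5 * DEG2RAD);
}

var height = 2; // a 2-unit tall object
console.log(distanceToFillView(height, 60).toFixed(2)); // ~1.73 units away
console.log(distanceToFillView(height, 20).toFixed(2)); // ~5.67 units away
```

With the 20-degree FOV the camera sits more than three times farther away for the same on-screen size, so perspective changes far less over the object's depth and the object reads as small.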
Page last updated: 2011-09-06
Loading Resources at Runtime
In some situations, it is useful to make an asset available to a project without loading it in as part of a scene. For example, there may be a character or other object that can appear in any scene of the game but which will only be used infrequently (this might be a "secret" feature, an error message or a highscore alert, say). Furthermore, you may even want to load assets from a separate file or URL to reduce initial download time or allow for interchangeable game content.
Unity supports Resource Folders in the project to allow content to be supplied in the main game file yet not be loaded until requested. In Unity Pro, Unity iOS Advanced and Unity Android Advanced, you can also create Asset Bundles. These are files completely separate from the main game file which contain assets to be accessed by the game on demand from a file or URL.
Asset Bundles (Unity Pro-only/iOS Advanced/Android Advanced licenses only)
An Asset Bundle is an external collection of assets. You can have many Asset Bundles and therefore many different external collections of assets. These files exist outside of the built Unity player, usually sitting on a web server for end-users to access dynamically.
To build an Asset Bundle, you call BuildPipeline.BuildAssetBundle() from inside an Editor script. In the arguments, you specify an array of Objects to be included in the built file, along with some other options. This will build a file that you can later load dynamically in the runtime by using AssetBundle.Load().
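As a sketch of such an Editor script (the menu item name and output file name are illustrative assumptions; the script must live in a folder named "Editor"):

// Editor script sketch: build an Asset Bundle from the current selection.
// The menu item name and the output file name are example values.
@MenuItem ("Assets/Build AssetBundle From Selection")
static function ExportBundle () {
    // Gather the selected assets, including assets inside selected folders.
    var selection = Selection.GetFiltered(typeof(Object), SelectionMode.DeepAssets);
    // Write the bundle; it can later be loaded and read with AssetBundle.Load().
    BuildPipeline.BuildAssetBundle(Selection.activeObject, selection,
        "Selection.unity3d", BuildAssetBundleOptions.CollectDependencies);
}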
Resource Folders
Resource Folders are collections of assets that are included in the built Unity player, but are not necessarily linked to any GameObject in the Inspector.
To put anything into a Resource Folder, you simply create a new folder inside the Project View, and name the folder "Resources". You can have multiple Resource Folders organized differently in your Project. Whenever you want to load an asset from one of these folders, you call Resources.Load().
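For example, assuming a prefab has been saved as Assets/Resources/enemy.prefab (the asset name is illustrative; note that the path passed to Resources.Load() is relative to the Resources folder and omits the file extension):

// Instantiate a prefab stored in a Resources folder.
// The asset name "enemy" is an example value.
function Start () {
    var enemy : GameObject = Instantiate(Resources.Load("enemy"));
}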
If your target deployable is a Streaming Web Player, you can define which scene will include everything in your Resource Folders. You do this with the First Streamed Level With Resources setting in the Player Settings (menu: Edit > Project Settings > Player). The stream queue is determined by the scene order in the Build Settings.
Note:
All assets found in the Resources folders and their dependencies are stored in a file called resources.assets. If an asset is already used by another level, it is stored in the .sharedAssets file for that level. The First Streamed Level With Resources setting determines the level at which the resources.assets file will be collected and included in the build.
If a level prior to the "First Streamed Level" includes an asset in a Resource folder, the asset will be stored in the assets for that level. If it is included afterwards, the level will reference the asset from the "resources.assets" file.
Only assets that are in the Resources folder can be accessed through Resources.Load. However many more assets might end up in the "resources.assets" file since they are dependencies. (For example a Material in the Resources folder might reference a Texture outside of the Resources folder)
Resource Unloading
You can unload resources of an AssetBundle by calling AssetBundle.Unload(). If you pass true for the unloadAllLoadedObjects parameter, both the objects held internally by the AssetBundle and the ones loaded from the AssetBundle using AssetBundle.Load() will be destroyed and memory used by the bundle will be released.
Sometimes you may prefer to load an AssetBundle, instantiate the objects desired and release the memory used up by the bundle while keeping the objects around. The benefit is that you free up memory for other tasks, for instance loading another AssetBundle. In this scenario you would pass false as the parameter. After the bundle is destroyed you will not be able to load objects from it any more.
If you want to destroy scene objects loaded using Resources.Load() prior to loading another level, call Object.Destroy() on them. To release assets, use Resources.UnloadUnusedAssets().
Page last updated: 2012-11-30
Modifying Source Assets Through Scripting
Automatic Instantiation
Usually when you want to make a modification to any sort of game asset, you want it to happen at runtime and you want it to be temporary. For example, if your character picks up an invincibility power-up, you might want to change the shader of the material for the player character to visually demonstrate the invincible state. This action involves modifying the material that's being used. This modification is not permanent because we don't want the material to have a different shader when we exit Play Mode.
However, it is possible in Unity to write scripts that will permanently modify a source asset. Let's use the above material example as a starting point.
To temporarily change the material's shader, we change the shader property of the material component.
private var invincibleShader = Shader.Find ("Specular");
function StartInvincibility() {
renderer.material.shader = invincibleShader;
}
When using this script and exiting Play Mode, the state of the material will be reset to whatever it was before entering Play Mode initially. This happens because whenever renderer.material is accessed, the material is automatically instantiated and the instance is returned. This instance is simultaneously and automatically applied to the renderer. So you can make any changes that your heart desires without fear of permanence.
Direct Modification
IMPORTANT NOTE
The method presented below will modify actual source asset files used within Unity. These modifications are not undoable. Use them with caution.
Now let's say that we don't want the material to reset when we exit play mode. For this, you can use renderer.sharedMaterial. The sharedMaterial property will return the actual asset used by this renderer (and maybe others).
The code below will permanently change the material to use the Specular shader. It will not reset the material to the state it was in before Play Mode.
private var invincibleShader = Shader.Find ("Specular");
function StartInvincibility() {
renderer.sharedMaterial.shader = invincibleShader;
}
As you can see, making any changes to a sharedMaterial can be both useful and risky. Any change made to a sharedMaterial will be permanent, and not undoable.
Applicable Class Members
The same formula described above can be applied to more than just materials. The full list of assets that follow this convention is as follows:
- Materials: renderer.material and renderer.sharedMaterial
- Meshes: meshFilter.mesh and meshFilter.sharedMesh
- Physic Materials: collider.material and collider.sharedMaterial
Direct Assignment
If you declare a public variable of any of the above classes (Material, Mesh, or Physic Material) and make modifications to the asset through that variable instead of through the relevant class member, you will not receive the benefits of automatic instantiation before the modifications are applied.
Assets that are not automatically instantiated

Desktop
There are two kinds of assets that are never automatically instantiated when you modify them.
Any modifications made to these assets through scripting are always permanent, and never undoable. So if you're changing your terrain's heightmap through scripting, you'll need to handle instantiating and assigning values yourself. The same goes for textures: if you change the pixels of a texture file, the change is permanent.

iOS
Texture2D assets are never automatically instantiated when modifying them. Any modifications made to these assets through scripting are always permanent, and never undoable. So if you change the pixels of a texture file, the change is permanent.

Android
Texture2D assets are never automatically instantiated when modifying them. Any modifications made to these assets through scripting are always permanent, and never undoable. So if you change the pixels of a texture file, the change is permanent.
Generating Mesh Geometry Procedurally
The Mesh class gives scripts access to an object's mesh geometry, allowing meshes to be created or modified at runtime. This technique is useful for graphical effects (for example, stretching or squashing an object) but can also be useful in level design and optimization. The following sections explain the basic details of how a mesh is constructed, along with an exploration of the API and examples.
Page last updated: 2012-11-09
Anatomy of a Mesh
A mesh consists of triangles arranged in 3D space to create the impression of a solid object. A triangle is defined by its three corner points or vertices. In the Mesh class, the vertices are all stored in a single array and each triangle is specified using three integers that correspond to indices of the vertex array. The triangles are also collected together into a single array of integers; the integers are taken in groups of three from the start of this array, so elements 0, 1 and 2 define the first triangle, 3, 4 and 5 define the second, and so on. Any given vertex can be reused in as many triangles as desired but there are reasons why you may not want to do this, as explained below.
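The grouping of the triangle array into consecutive triples of indices can be illustrated with a short sketch in plain JavaScript (arrays stand in for Unity's Mesh data; the `getTriangleIndices` helper is hypothetical, not part of any API):

```javascript
// Flat triangle index array: every three consecutive entries form one triangle.
// Hypothetical helper that returns the three vertex indices of triangle t.
function getTriangleIndices(triangles, t) {
  return [triangles[3 * t], triangles[3 * t + 1], triangles[3 * t + 2]];
}

// Two triangles sharing vertices 1 and 2 (a quad split along its diagonal).
var triangles = [0, 1, 2, 2, 1, 3];

console.log(getTriangleIndices(triangles, 0)); // first triangle uses indices 0, 1, 2
console.log(getTriangleIndices(triangles, 1)); // second triangle uses indices 2, 1, 3
```

Note how vertices 1 and 2 are reused by both triangles rather than being stored twice.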
Lighting and Normals
The triangles are enough to define the basic shape of the object but extra information is needed to display the mesh in most cases. To allow the object to be shaded correctly for lighting, a normal vector must be supplied for each vertex. A normal is a vector that points outward, perpendicular to the mesh surface at the position of the vertex it is associated with. During the shading calculation, each vertex normal is compared with the direction of the incoming light, which is also a vector. If the two vectors are perfectly aligned, then the surface is receiving light head-on at that point and the full brightness of the light will be used for shading. A light coming exactly side-on to the normal vector will give no illumination to the surface at that position. Typically, the light will arrive at an angle to the normal and so the shading will be somewhere in between full brightness and complete darkness, depending on the angle.

Since the mesh is made up of triangles, it may seem that the normals at corners will simply be perpendicular to the plane of their triangle. However, normals are actually interpolated across the triangle to give the surface direction of the intermediate positions between the corners. If all three normals are pointing in the same direction then the triangle will be uniformly lit all over. The effect of having separate triangles uniformly shaded is that the edges will be very crisp and distinct. This is exactly what is required for a model of a cube or other sharp-edged solid but the interpolation of the normals can be used to create smooth shading to approximate a curved surface.
To get crisp edges, it is necessary to double up vertices at each edge since both of the two adjacent triangles will need their own separate normals. For curved surfaces, vertices will usually be shared along edges but a bit of intuition is often required to determine the best direction for the shared normals. A normal might simply be the average of the normals of the planes of the surrounding triangles. However, for an object like a sphere, the normals should just be pointing directly outward from the sphere's centre.
By calling Mesh.RecalculateNormals, you can get Unity to work out the normals' directions for you by making some assumptions about the "meaning" of the mesh geometry; it assumes that vertices shared between triangles indicate a smooth surface while doubled-up vertices indicate a crisp edge. While this is not a bad approximation in most cases, RecalculateNormals will be tripped up by some texturing situations where vertices must be doubled even though the surface is smooth.
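The shading comparison described above boils down to a dot product between the vertex normal and the direction toward the light. A minimal sketch in plain JavaScript (not Unity code; `lambert` is a hypothetical helper, and both vectors are assumed to be normalized):

```javascript
// Lambertian brightness: dot(normal, toLight), clamped so that surfaces
// facing away from the light receive no (negative) illumination.
function lambert(normal, toLight) {
  var d = normal[0] * toLight[0] + normal[1] * toLight[1] + normal[2] * toLight[2];
  return Math.max(0, d);
}

console.log(lambert([0, 0, 1], [0, 0, 1]));  // head-on light: full brightness (1)
console.log(lambert([0, 0, 1], [1, 0, 0]));  // side-on light: no illumination (0)
console.log(lambert([0, 0, 1], [0, 0, -1])); // light from behind: clamped to 0
```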
Texturing
In addition to the lighting, a model will also typically make use of texturing to create fine detail on its surface. A texture is a bit like an image printed on a stretchable sheet of rubber. For each mesh triangle, a triangular area of the texture image is defined and that texture triangle is stretched and "pinned" to fit the mesh triangle. To make this work, each vertex needs to store the coordinates of the image position that will be pinned to it. These coordinates are two dimensional and scaled to the 0..1 range (0 means the bottom/left of the image and 1 means the right/top). To avoid confusing these coordinates with the Cartesian coordinates of the 3D world, they are referred to as U and V rather than the more familiar X and Y, and so they are commonly called UV coordinates.
Like normals, texture coordinates are unique to each vertex and so there are situations where you need to double up vertices purely to get different UV values across an edge. An obvious example is where two adjacent triangles use discontinuous parts of the texture image (eyes on a face texture, say). Also, most objects that are fully enclosed volumes will need a "seam" where an area of texture wraps around and joins together. The UV values at one side of the seam will be different from those at the other side.
Page last updated: 2011-07-15
Using the Mesh Class
The Mesh class is the basic script interface to an object's mesh geometry. It uses arrays to represent the vertices, triangles, normals and texture coordinates, and also supplies a number of other useful properties and functions to assist mesh generation.
Accessing an Object's Mesh
The mesh data is attached to an object using the Mesh Filter component (the object will also need a Mesh Renderer to make the geometry visible). This component is accessed using the familiar GetComponent function:
var mf: MeshFilter = GetComponent(MeshFilter); // Use mf.mesh to refer to the mesh itself.
Adding the Mesh Data
The Mesh object has properties for the vertices along with their associated data (normals and UV coordinates) and also for the triangle data. The vertices may be supplied in any order, but the arrays of normals and UVs must be ordered so that the indices all correspond with the vertices (i.e., element 0 of the normals array supplies the normal for vertex 0). The vertices are Vector3s representing points in the object's local space. The normals are normalized Vector3s representing directions, again in local coordinates. The UVs are specified as Vector2s, but since the Vector2 type doesn't have fields called U and V, you must mentally convert them to X and Y respectively.
The triangles are specified as triples of integers that act as indices into the vertex array. Rather than use a special class to represent a triangle, the array is just a simple list of integer indices. These are taken in groups of three for each triangle, so the first three elements define the first triangle, the next three define the second, and so on. An important detail of the triangles is the ordering of the corner vertices: they should be arranged so that the corners go around clockwise as you look down on the visible outer surface of the triangle, although it doesn't matter which corner you start with.
Page last updated: 2012-11-09
Example - Creating a Billboard Plane
Unity comes with a Plane primitive object but a simpler plane may be useful in 2D games or GUI, and in any case makes a good starting example. A minimal plane will consist of four vertices to define the corners along with two triangles.
The first thing is to set the vertices array. We'll assume that the plane lies in the X and Y axes and let its width and height be determined by parameter variables. We'll supply the vertices in the order bottom-left, bottom-right, top-left, top-right.

var vertices: Vector3[] = new Vector3[4];
vertices[0] = new Vector3(0, 0, 0);
vertices[1] = new Vector3(width, 0, 0);
vertices[2] = new Vector3(0, height, 0);
vertices[3] = new Vector3(width, height, 0);
mesh.vertices = vertices;
(Since the Mesh data properties execute code behind the scenes, it is much more efficient to set up the data in your own array and then assign this to a property rather than access the property array element by element.)
Next come the triangles. Since we want two triangles, each defined by three integers, the triangles array will have six elements in total. Remembering the clockwise rule for ordering the corners, the lower left triangle will use 0, 2, 1 as its corner indices, while the upper right one will use 2, 3, 1.
var tri: int[] = new int[6];
// Lower left triangle.
tri[0] = 0;
tri[1] = 2;
tri[2] = 1;
// Upper right triangle.
tri[3] = 2;
tri[4] = 3;
tri[5] = 1;
mesh.triangles = tri;
A mesh with just the vertices and triangles set up will be visible in the editor but will not look very convincing since it is not correctly shaded without the normals. The normals for the flat plane are very simple - they are all identical and point in the negative Z direction in the plane's local space. With the normals added, the plane will be correctly shaded but remember that you need a light in the scene to see the effect.
var normals: Vector3[] = new Vector3[4];
normals[0] = -Vector3.forward;
normals[1] = -Vector3.forward;
normals[2] = -Vector3.forward;
normals[3] = -Vector3.forward;
mesh.normals = normals;
Finally, adding texture coordinates to the mesh will enable it to display a material correctly. Assuming we want to show the whole image across the plane, the UV values will all be 0 or 1, corresponding to the corners of the texture.
var uv: Vector2[] = new Vector2[4];
uv[0] = new Vector2(0, 0);
uv[1] = new Vector2(1, 0);
uv[2] = new Vector2(0, 1);
uv[3] = new Vector2(1, 1);
mesh.uv = uv;
The complete script might look a bit like this:-
var width: float;
var height: float;
function Start() {
var mf: MeshFilter = GetComponent(MeshFilter);
var mesh = new Mesh();
mf.mesh = mesh;
var vertices: Vector3[] = new Vector3[4];
vertices[0] = new Vector3(0, 0, 0);
vertices[1] = new Vector3(width, 0, 0);
vertices[2] = new Vector3(0, height, 0);
vertices[3] = new Vector3(width, height, 0);
mesh.vertices = vertices;
var tri: int[] = new int[6];
tri[0] = 0;
tri[1] = 2;
tri[2] = 1;
tri[3] = 2;
tri[4] = 3;
tri[5] = 1;
mesh.triangles = tri;
var normals: Vector3[] = new Vector3[4];
normals[0] = -Vector3.forward;
normals[1] = -Vector3.forward;
normals[2] = -Vector3.forward;
normals[3] = -Vector3.forward;
mesh.normals = normals;
var uv: Vector2[] = new Vector2[4];
uv[0] = new Vector2(0, 0);
uv[1] = new Vector2(1, 0);
uv[2] = new Vector2(0, 1);
uv[3] = new Vector2(1, 1);
mesh.uv = uv;
}
Note that if the code is executed once in the Start function, the mesh will stay the same throughout the game. However, you can just as easily put the code in the Update function to allow the mesh to be changed each frame (although this will increase the CPU overhead considerably).
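As a sanity check on the clockwise winding rule used in the script above, the z component of the cross product of two triangle edges tells you which way each face points; both triangles of the plane should agree. A sketch in plain JavaScript (the arrays mirror the example; `windingZ` is a hypothetical helper, not Unity API):

```javascript
// z component of the cross product of two triangle edges. Its sign tells
// which side of the XY plane the face normal points to.
function windingZ(verts, i0, i1, i2) {
  var e1x = verts[i1][0] - verts[i0][0], e1y = verts[i1][1] - verts[i0][1];
  var e2x = verts[i2][0] - verts[i0][0], e2y = verts[i2][1] - verts[i0][1];
  return e1x * e2y - e1y * e2x;
}

var width = 2, height = 3;
var verts = [[0, 0], [width, 0], [0, height], [width, height]];

// The two triangles from the plane example: (0, 2, 1) and (2, 3, 1).
console.log(windingZ(verts, 0, 2, 1)); // negative: faces -Z, matching the normals
console.log(windingZ(verts, 2, 3, 1)); // negative: same orientation as the first
```

Both values come out negative (-width * height), confirming that the two triangles face the same way, in the negative Z direction that the normals point.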
Page last updated: 2011-08-15
Styled Text
The text for GUI elements and text meshes can incorporate multiple font styles and sizes. The GUIStyle, GUIText and TextMesh classes have a Rich Text setting which instructs Unity to look for markup tags within the text. These tags are not displayed but indicate style changes to be applied to the text.
Markup format
The markup system is inspired by HTML but isn't intended to be strictly compatible with standard HTML. The basic idea is that a section of text can be enclosed inside a pair of matching tags:-
We are <b>not</b> amused
As the example shows, the tags are just pieces of text inside the "angle bracket" characters, < and >. The text inside the tag denotes its name (which in this case is just b). Note that the tag at the end of the section has the same name as the one at the start but with the slash / character added. The tags are not displayed to the user directly but are interpreted as instructions for styling the text they enclose. The b tag used in the example above applies boldface to the word "not", so the text will appear onscreen as:-
We are not amused
A marked up section of text (including the tags that enclose it) is referred to as an element.
Nested elements
It is possible to apply more than one style to a section of text by "nesting" one element inside another
We are <b><i>definitely not</i></b> amused
The i tag applies italic style, so this would be presented onscreen as
We are definitely not amused
Note the ordering of the ending tags, which is in reverse to that of the starting tags. The reason for this is perhaps clearer when you consider that the inner tags need not span the whole text of the outermost element
We are <b>absolutely <i>definitely</i> not</b> amused
which gives
We are absolutely definitely not amused
Tag parameters
Some tags have a simple all-or-nothing effect on the text but others might allow for variations. For example, the color tag needs to know which colour to apply. Information like this is added to tags by the use of parameters:-
We are <color=green>green</color> with envy
Note that the ending tag doesn't include the parameter value. Optionally, the value can be surrounded by quotation marks but this isn't required.
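Because the tags are just text, marked-up strings can be assembled in script. A small sketch in plain JavaScript (the `tag` and `tagWithValue` helpers are hypothetical, shown only to illustrate the markup shape):

```javascript
// Hypothetical helpers that build Unity rich-text markup strings.
function tag(name, text) {
  return "<" + name + ">" + text + "</" + name + ">";
}
// Note the ending tag carries no parameter value, per the rule above.
function tagWithValue(name, value, text) {
  return "<" + name + "=" + value + ">" + text + "</" + name + ">";
}

console.log("We are " + tag("b", "not") + " amused");
console.log("We are " + tagWithValue("color", "green", "green") + " with envy");
```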
Supported tags
The following list describes all the styling tags supported by Unity.
b
Renders the text in boldface.
We are <b>not</b> amused
i
Renders the text in italics.
We are <i>usually</i> not amused
size
Sets the size of the text according to the parameter value, given in pixels.
We are <size=50>largely</size> unaffected
color
Sets the colour of the text according to the parameter value. The colour can be specified in the traditional HTML format
#rrggbbaa
...where the letters correspond to pairs of hexadecimal digits denoting the red, green, blue and alpha (transparency) values for the colour. For example, cyan at full opacity would be specified by
<color=#00ffffff>...
Another option is to use the name of the colour. This is easier to understand but naturally, the range of colours is limited and full opacity is always assumed.
<color=cyan>...
The available colour names are given in the table below.
| Colour name | Hex value | Swatch |
|---|---|---|
| aqua (same as cyan) | #00ffffff | ![]() |
| black | #000000ff | ![]() |
| blue | #0000ffff | ![]() |
| brown | #a52a2aff | ![]() |
| cyan (same as aqua) | #00ffffff | ![]() |
| darkblue | #0000a0ff | ![]() |
| fuchsia (same as magenta) | #ff00ffff | ![]() |
| green | #008000ff | ![]() |
| grey | #808080ff | ![]() |
| lightblue | #add8e6ff | ![]() |
| lime | #00ff00ff | ![]() |
| magenta (same as fuchsia) | #ff00ffff | ![]() |
| maroon | #800000ff | ![]() |
| navy | #000080ff | ![]() |
| olive | #808000ff | ![]() |
| orange | #ffa500ff | ![]() |
| purple | #800080ff | ![]() |
| red | #ff0000ff | ![]() |
| silver | #c0c0c0ff | ![]() |
| teal | #008080ff | ![]() |
| white | #ffffffff | ![]() |
| yellow | #ffff00ff | ![]() |
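For reference, the #rrggbbaa format in the table above decodes into four byte values. A small sketch in plain JavaScript (the `parseHexColor` function name is hypothetical):

```javascript
// Decode an "#rrggbbaa" string into red, green, blue and alpha bytes (0-255).
function parseHexColor(s) {
  return {
    r: parseInt(s.substr(1, 2), 16),
    g: parseInt(s.substr(3, 2), 16),
    b: parseInt(s.substr(5, 2), 16),
    a: parseInt(s.substr(7, 2), 16)
  };
}

// Cyan at full opacity, as in the <color=#00ffffff> example above.
console.log(parseHexColor("#00ffffff")); // r: 0, g: 255, b: 255, a: 255
```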
material
This is only useful for text meshes and renders a section of text with a material specified by the parameter. The value is an index into the text mesh's array of materials as shown by the inspector.
We are <material=2>texturally</material> amused
quad
This is only useful for text meshes and renders an image inline with the text. It takes parameters that specify the material to use for the image, the image height in pixels, and a further four that denote a rectangular area of the image to display. Unlike the other tags, quad does not surround a piece of text and so there is no ending tag - the slash character is placed at the end of the initial tag to indicate that it is "self-closing".
<quad material=1 size=20 x=0.1 y=0.1 width=0.5 height=0.5 />
This selects the material at position 1 in the renderer's material array and sets the height of the image to 20 pixels. The rectangular area of the image to display is given by the x, y, width and height values, which are all expressed as fractions of the unscaled width and height of the texture.
Page last updated: 2012-07-01
Using DLLs
Usually, scripts are kept in a project as source files and compiled by Unity whenever the source changes. However, it is also possible to compile a script to a dynamically linked library (DLL) using an external compiler. The resulting DLL can then be added to the project and the classes it contains can be attached to objects just like normal scripts.
It is generally much easier to work with scripts than DLLs in Unity. However, you may have access to third party Mono code which is supplied in the form of a DLL. When developing your own code, you may be able to use compilers not supported by Unity (F#, for example) by compiling the code to a DLL and adding it to your Unity project. Also, you may want to supply Unity code without the source (for an Asset Store product, say) and a DLL is an easy way to do this.
Creating a DLL
To create a DLL, you will first need a suitable compiler. Not all compilers that produce .NET code are guaranteed to work with Unity, so it may be wise to test the compiler with some available code before doing significant work with it. If the DLL contains no code that depends on the Unity API then you can simply compile it to a DLL using the appropriate compiler options. If you do want to use the Unity API then you will need to make Unity's own DLLs available to the compiler. On a Mac, these are contained in the application bundle (you can see the internal structure of the bundle by using the Show Package Contents command from the contextual menu; right click or ctrl-click the Unity application):-
The path to the Unity DLLs will typically be
/Applications/Unity/Unity.app/Contents/Frameworks/Managed/
...and the two DLLs are called UnityEngine.dll and UnityEditor.dll.
On Windows, the DLLs can be found in the folders that accompany the Unity application. The path will typically be
C:\Program Files (x86)\Unity\Editor\Data\Managed
...while the names of the DLLs are the same as for Mac OS.
The exact options for compiling the DLL will vary depending on the compiler used. As an example, the command line for the Mono C# compiler, mcs, might look like this on Mac OS:-
mcs -r:/Applications/Unity/Unity.app/Contents/Frameworks/Managed/UnityEngine.dll -target:library ClassesForDLL.cs
Here, the -r option specifies a path to a library to be included in the build, in this case the UnityEngine library. The -target option specifies which type of build is required; the word "library" is used to select a DLL build. Finally, the name of the source file to compile is ClassesForDLL.cs (it is assumed that this file is in the current working folder, but you could specify the file using a full path if necessary). Assuming all goes well, the resulting DLL file will appear shortly in the same folder as the source file.
Using the DLL
Once compiled, the DLL file can simply be dragged into the Unity project like any other asset. The DLL asset has a foldout triangle which can be used to reveal the separate classes inside the library. Classes that derive from MonoBehaviour can be dragged onto Game Objects like ordinary scripts. Non-MonoBehaviour classes can be used directly from other scripts in the usual way.

A folded-out DLL with the classes visible
Execution Order
In Unity scripting, there are a number of event functions that get executed in a predetermined order as a script executes. This execution order is described below.
First Scene Load
These functions get called when a scene starts (once for each object in the scene).
- Awake: This function is always called before any Start functions and also just after a prefab is instantiated. (If a GameObject is inactive during start up, Awake is not called until it is made active, or a function in any script attached to it is called.)
- OnEnable: (only called if the Object is active): This function is called just after the object is enabled. This happens when a MonoBehaviour instance is created, such as when a level is loaded or a GameObject with the script component is instantiated.
Before the first frame update
- Start: Start is called before the first frame update, only if the script instance is enabled.
In between frames
- OnApplicationPause: This is called at the end of the frame where the pause is detected, effectively between the normal frame updates. One extra frame will be issued after OnApplicationPause is called to allow the game to show graphics that indicate the paused state.
Update Order
When you're keeping track of game logic, interactions, animations, camera positions, and so on, there are a few different events you can use. The common pattern is to perform most tasks inside the Update() function, but there are other functions you can use as well.
- FixedUpdate: FixedUpdate() is often called more frequently than Update(). It can be called multiple times per frame if the frame rate is low, and it may not be called between frames at all if the frame rate is high. All physics calculations and updates occur immediately after FixedUpdate(). When applying movement calculations inside FixedUpdate(), you do not need to multiply your values by Time.deltaTime. This is because FixedUpdate() is called on a reliable timer, independent of the frame rate.
- Update: Update() is called once per frame. It is the main workhorse function for frame updates.
- LateUpdate: LateUpdate() is called once per frame, after Update() has finished. Any calculations performed in Update() will have completed when LateUpdate() begins. A common use for LateUpdate() is a following third-person camera: if you make your character move and turn inside Update(), you can perform all camera movement and rotation calculations in LateUpdate(). This ensures that the character has moved completely before the camera tracks its position.
Rendering
- OnPreCull: Called before the camera culls the scene. Culling determines which objects are visible to the camera. OnPreCull is called just before culling takes place.
- OnBecameVisible/OnBecameInvisible: Called when an object becomes visible or invisible to any camera.
- OnWillRenderObject: Called once for each camera if the object is visible.
- OnPreRender: Called before the camera starts rendering the scene.
- OnRenderObject: Called after all regular scene rendering is done. You can use the GL class or Graphics.DrawMeshNow to draw custom geometry at this point.
- OnPostRender: Called after a camera finishes rendering the scene.
- OnRenderImage (Pro only): Called after scene rendering is complete to allow post-processing of the screen image.
- OnGUI: Called multiple times per frame in response to GUI events. The Layout and Repaint events are processed first, followed by a Layout and keyboard/mouse event for each input event.
- OnDrawGizmos: Used for drawing Gizmos in the scene view for visualisation purposes.
Coroutines
Normal coroutine updates are run after the Update function returns. A coroutine is a function that can suspend its execution (yield) until the given YieldInstruction finishes. Different uses of coroutines:
- yield; The coroutine will continue after all Update functions have been called on the next frame.
- yield WaitForSeconds(2); Continue after a specified time delay, after all Update functions have been called for the frame.
- yield WaitForFixedUpdate(); Continue after all FixedUpdate has been called on all scripts.
- yield WWW; Continue after a WWW download has completed.
- yield StartCoroutine(MyFunc); Chains the coroutine, and will wait for the MyFunc coroutine to complete first.
When the Object is Destroyed
- OnDestroy: This function is called after all frame updates for the last frame of the object's existence (the object might be destroyed in response to Object.Destroy or at the closure of a scene).
When Quitting
These functions get called on all the active objects in your scene:
- OnApplicationQuit: This function is called on all game objects before the application is quit. In the editor it is called when the user stops playmode. In the web player it is called when the web view is closed.
- OnDisable: This function is called when the behaviour becomes disabled or inactive.
So in conclusion, this is the execution order for any given script:
- All Awake calls
- All Start calls
- while (stepping towards variable delta time)
- All FixedUpdate functions
- Physics simulation
- OnEnter/Exit/Stay trigger functions
- OnEnter/Exit/Stay collision functions
- Rigidbody interpolation applies transform.position and rotation
- OnMouseDown/OnMouseUp etc. events
- All Update functions
- Animations are advanced, blended and applied to transform
- All LateUpdate functions
- Rendering
Hints
- Coroutines are executed after all Update functions.
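The per-frame ordering above can be sketched as a toy loop in plain JavaScript (a simplified illustrative model, not Unity's actual scheduler; the accumulator pattern shows why FixedUpdate may run zero or more times per frame):

```javascript
// Simplified frame step: drain the fixed-time accumulator first, then run
// the ordinary per-frame callbacks in the documented order.
function simulateFrame(accumulator, frameDelta, fixedDelta) {
  var order = [];
  accumulator += frameDelta;
  while (accumulator >= fixedDelta) { // FixedUpdate + physics steps
    order.push("FixedUpdate");
    accumulator -= fixedDelta;
  }
  order.push("Update", "LateUpdate", "Render");
  return { order: order, accumulator: accumulator };
}

// A slow 0.05 s frame with a 0.02 s fixed timestep runs FixedUpdate twice
// before Update, LateUpdate and rendering.
console.log(simulateFrame(0, 0.05, 0.02).order);
```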
Practical Guide to Optimization for Mobiles
This guide is for developers new to mobile game development, who are probably feeling overwhelmed, and are either planning and prototyping a new mobile game or porting an existing project to run smoothly on a mobile device. It should also be useful as a reference for anyone making mobile games or browser games which target old PCs and netbooks.
Optimization is a broad topic, and how you do it depends a lot on your game, so this guide is best read as an introduction or reference rather than a step-by-step guide that guarantees a smooth product.
All mobile devices are not created equal
The information here assumes hardware around the level of the Apple A4 chipset, which is used on the original iPad, the iPhone 3GS, and the 3rd generation iPod Touch. On the Android side, that would mean an Android phone such as the Nexus One, or most phones that run Android 2.3 Gingerbread. On average, these devices were released in early 2010. Out of the app-hungry market, these devices are the older, slower portion. But they should be supported, because they represent a large portion of the market.
There are much slower, and much faster phones out there as well. The computational capability of mobile devices is increasing at an alarming rate. It's not unheard of for a new generation of a mobile GPU to be five times faster than its predecessor. That's fast, when compared to the PC industry.
For an overview of Apple mobile device tech specs, see the Hardware page.
If you want to develop for mobile devices which will be popular in the future, or exclusively for high end devices right now, you will be able to get away with doing more. See Future Mobile Devices.
The very low end, such as the iPhone 3G and the first and second generation iPod Touches, is extremely limited and even more care must be taken to optimize for it. However, there is some question as to whether consumers who have not upgraded their devices will be buying apps. So unless you are making a free app, it might not be worthwhile to support the old hardware.
Make optimization a design consideration, not a final step
British computer scientist Michael A. Jackson is often quoted for his Rules of Program Optimization:
The First Rule of Program Optimization: Don't do it. The Second Rule of Program Optimization (for experts only!): Don't do it yet.
His rationale was that, considering how fast computers are, and how quickly their speed is increasing, there is a good chance that if you program something it will run fast enough. Besides that, if you try to optimize too heavily, you might over-complicate things, limit yourself, or create tons of bugs.
However, if you are developing mobile games, there is another consideration: The hardware that is on the market right now is very limited compared to the computers we are used to working with, so the risk of creating something that simply won't run on the device balances out the risk of over-complication that comes with optimizing from the start.
Throughout this guide we will try to point out situations where an optimization would help a lot, versus situations where it would be frivolous.
Optimization: Not just for programmers
Artists also need to know the limitations of the platform and the methods that are used to get around them, so they can make creative choices that will pay off, and don't have to redo work.
- More responsibility can fall on the artist if the game design calls for atmosphere and lighting to be drawn into textures instead of being baked.
- Whenever anything can be baked, artists can produce content for baking, instead of real-time rendering. This allows them to ignore technical limitations and work freely.
Design your game to make a smooth runtime fall into your lap
These two pages detail general trends in game performance, and will explain how you can best design your game to be optimized, or how you can intuitively figure out which things need to be optimized if you've already gone into production.
Profile early and often
Profiling is important because it helps you discern which optimizations will pay off with big performance increases and which ones are a waste of your time. Because rendering is handled on a separate chip (the GPU), the time it takes to render a frame is not the time the CPU takes plus the time the GPU takes; instead, it is the longer of the two. That means that if the CPU is slowing things down, optimizing your shaders won't increase the frame rate at all, and if the GPU is slowing things down, optimizing physics and scripts won't help at all.
Different parts of the game and different situations often perform differently as well, so one part of the game might cause 100 millisecond frames entirely due to a script, and another part of the game might cause the same slowdown because of something that is being rendered. So, at the very least, you need to know where all the bottlenecks are if you are going to optimize your game.
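The "longer of the two" rule is easy to see in a toy model in plain JavaScript (an illustrative simplification, not profiler output; the `frameTimeMs` name is hypothetical):

```javascript
// With CPU and GPU working in parallel, a frame takes as long as the slower
// of the two, so only optimizing the current bottleneck changes frame time.
function frameTimeMs(cpuMs, gpuMs) {
  return Math.max(cpuMs, gpuMs);
}

console.log(frameTimeMs(30, 10)); // 30: CPU-bound; faster shaders won't help
console.log(frameTimeMs(10, 30)); // 30: GPU-bound; faster scripts won't help
```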
Unity Profiler (Pro only)
The main Profiler in Unity can be used when targeting iOS or Android. See the Profiler guide for basic instructions on how to use it.
Internal Profiler
The internal profiler spews out text every 30 frames. It can help you figure out which aspects of your game are slowing things down, be it physics, scripts, or rendering, but it doesn't go into much detail, for example, which script or which renderer is the culprit.
See the Internal Profiler page for more details on how it works and how to turn it on.
Profiler indicates most of time spent rendering
Profiler indicates most of time spent outside of rendering
Table of Contents
- Practical Guide to Optimization for Mobiles - Future & High End Devices
- Practical Guide to Optimization for Mobiles - Graphics Methods
- Practical Guide to Optimization for Mobiles - Scripting and Gameplay Methods
- Practical Guide to Optimization for Mobiles - Rendering Optimizations
- Practical Guide to Optimization for Mobiles - Optimizing Scripts
Practical Guide to Optimization for Mobiles - Future & High End Devices

The graphical power of next-generation mobile devices is approaching that of the current generation of consoles (Wii, Xbox 360, and PS3). What will the consumer smartphone market look like in two years? It's hard to say for sure, but considering how things have been going, the average smartphone on the market will have a chipset about as fast as NVIDIA's Tegra 3 (Asus Transformer Prime, Google Nexus 7"), or Apple's A5X (iPad 3), and high-end tablets will pack graphical performance to rival today's consoles and consumer laptops.
What can these new devices do?
- Bumpmaps everywhere
- Reflective water & simple image effects
- Realtime shadows (Unity 4.0 feature)
- HD video playback
- Faster script execution
To get a sense of what is already being done for this coming generation of phones & tablets, watch NVIDIA's promotional video for Tegra 3. Bladeslinger and Shadowgun are Unity titles.
Page last updated: 2012-11-07
Practical Guide to Optimization for Mobiles - Graphics Methods
What are mobile devices capable of? How should you plan your game accordingly? If your game runs slow, and the profiler indicates that it's a rendering bottleneck, how do you know what to change, and how to make your game look good but still run fast? This page is dedicated to a general and non-technical exposition of the methods. If you are looking for the specifics, see the Rendering Optimizations page.
What you can reasonably expect to run on current consumer mobiles:
What you CANNOT reasonably expect to run on current consumer mobiles:
Examples - How top-notch mobile games are made
Shadowgun
Shadowgun is an impressive example of what can be done on current mobile hardware. But more specifically, it's a good example of what cannot be done, and how to get around the limitations. Especially because a small part of the game has been made publicly available in this blog post. Here's a basic rundown of things that Shadowgun does in order to keep performance up:
Sky Castle Demo
This demo was designed to show what Unity is capable of on high-end Android devices.
Bottom line - What this means for your game
The more you respect and understand the limitations of mobile devices, the better your game will look and the smoother it will perform. If you want to make a high-class game for mobile, you will benefit from understanding Unity's graphics pipeline and being able to write your own shaders. But if you want something to grab and use right away, ShadowGun's shaders, available here, are a good place to start.
Don't Simulate It, Bake It!
There is no question that games attempt to follow the laws of nature. The movement of every parabolic projectile and the color of every pixel of shiny chrome is derived from formulas first written to mimic observations of the real world. But a game is one part scientific simulation and one part painting. You can't compete in the mobile market with physically accurate rendering; the hardware simply isn't there yet. If you try to imitate the real world all the way, your game will end up limited, drab, and laggy. You have to pick up your polygons and your blend modes like they're paintbrushes.
The baked bumpmaps shown in Shadowgun are great examples of this. There are specular highlights already in the texture - the human eye doesn't notice that they don't actually line up with the reflected light and view directions - they are simply high-contrast details on the texture, completely faked, yet they end up looking great. This is a common cheating technique which has been used in many successful games. Compare the visor in the first Halo screenshot ever released with the visor from this release screenshot. It appears that the armor protrusions from the top of the helmet are reflected in the visor, but the reflection is actually baked into the visor texture. In League of Legends, a spell effect appears to have a pixel light attached to it, but it is actually a blended plane with a texture that was probably generated by taking a screenshot of a pixel light shining on the ground.
What works well:
What does not work:
|
But how do I actually do it?
See our Rendering Optimizations page.
Page last updated: 2012-11-07
iphone-OptimizedScriptingMethods
This section demonstrates ways that mobile developers write code and structure their games so that they run fast. The core idea here is that game design and optimization aren't really separate processes; decisions you make when you are designing your game can make it both fun and fast.
A historical example

You may remember old games where the player was only allowed one shot on the screen at a time, and reload speed was controlled by whether the bullet missed or not, instead of by a timer. This technique is called object pooling, and it simplifies memory management, making programs run more smoothly.
The creators of Space Invaders had only a small amount of RAM, and they had to ensure that their program would never need to allocate more than was available. If they let the player fire once every second, and they offered a powerup that decreased the reload time to half a second, they would have to ensure that there was enough memory to allocate a lot of projectiles in the case where the player fires as fast as possible and all of the shots live for the longest possible time. That would probably have posed a problem for them, so instead they just allocated one projectile and left it at that. As soon as the projectile dies, it is simply deactivated, then repositioned and activated when it is fired again. But it always lives in the same space in memory and doesn't have to move around or be constantly deleted and recreated.
An optimization, or a gameplay gem?
This is hardly realistic, but it happens to be fun. Tension is released in a climactic moment when the alien invaders approach the ground, similar to a climax in film or literature. The invaders' close proximity gives the adept player near-instantaneous reload time, allowing them to miraculously defend earth by mashing the fire key in perfect time. Good game designs live in a bizarre space between the interactive narrative and the background technology that powers it all. It's hard to plan out awesome, fun, efficient stuff like this, because code logistics and user interaction are two wildly different and deeply finicky things, and using them together to synthesize something fresh and fun takes a lot of thought and experimentation.
You probably can't plan out every aspect of your game in terms of interaction and playing nice with mobile hardware simultaneously. It's more likely that these "gems" where the two meet in harmony will pop up as accidents while you're experimenting. But having a solid understanding of the way your code runs on the hardware you intend to deploy on will help. If you want to see the detailed technical explanation of why object pooling is better, and learn about memory allocation, see our Scripting Optimizations page.
Will X run fast on Mobiles?
Say you are beginning to work on a game, and you want to impress your players with lots of action and flashy stuff happening at once. How do you plan those things out? How do you know where the limits are, in game terms like how many coins, how many zombies, how many opponent cars, etc? It all depends on how you code your game.
Generally, if you write your game code the easy way, or the most general and versatile way, you will run into performance issues a lot sooner. The more you rely on specific structures and tricks to run your game, the more your horizons will expand, and you will be able to cram more stuff on screen.
Easy and versatile, but slow
- Rigidbodies limited to 2 dimensions in a 2D game.
- Rigidbodies on projectiles.
- Using Instantiate and Destroy a lot.
- Lots of individual 3D objects for collectables or characters.
- Performing calculations every frame.
- Using OnGUI for your GUI or HUD.
Complicated and limited, but faster
- Writing your own physics code for a 2D game.
- Handling collision detection for projectiles yourself.
- Using Object Pooling instead of Instantiate and Destroy.
- Using animated sprites on particles to represent simple objects.
- Performing expensive calculations every few frames and caching the results.
- A custom GUI solution.
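The "perform expensive calculations every few frames and cache the results" item above can be sketched in C#. This is an illustrative sketch only, not code from this manual; the `FindNearestEnemy` method, the `"Enemy"` tag, and the half-second interval are all assumptions:

```csharp
using UnityEngine;

// Hypothetical sketch: recompute an expensive value on a coarse timer
// and cache the result in between, instead of recomputing every frame.
public class CachedTargeting : MonoBehaviour
{
    public float interval = 0.5f;   // seconds between recalculations (assumed value)
    private Transform nearestEnemy; // cached result, read every frame
    private float nextUpdate;

    void Update()
    {
        if (Time.time >= nextUpdate)
        {
            nextUpdate = Time.time + interval;
            nearestEnemy = FindNearestEnemy(); // expensive: searches the whole scene
        }
        // The rest of Update uses the cached nearestEnemy every frame.
    }

    Transform FindNearestEnemy()
    {
        GameObject[] enemies = GameObject.FindGameObjectsWithTag("Enemy");
        Transform best = null;
        float bestSqr = Mathf.Infinity;
        foreach (GameObject e in enemies)
        {
            // sqrMagnitude avoids a square root per enemy
            float d = (e.transform.position - transform.position).sqrMagnitude;
            if (d < bestSqr) { bestSqr = d; best = e.transform; }
        }
        return best;
    }
}
```

The same pattern applies to any per-frame calculation whose result changes slowly relative to the frame rate.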
Examples

Hundreds of rotating, dynamically lit, collectable coins onscreen at once
- NO: Each coin is a separate object with a rigidbody and a script that rotates it and allows it to be picked up.
- YES: The coins are a particle system with an animated texture, one script does the collision testing for all the coins and sets their color according to distance from a light.
- This example is implemented in the Scripting Optimizations page.
Your custom-built soft-body simulation
- NO: The world has pillows lying around everywhere, which you can throw around and make piles of.
- YES: Your character is a pillow, there is only one of them, and the situations it will be in are somewhat predictable (It only collides with spheres and axis-aligned cubes). You can probably code something which isn't a full-featured softbody simulation, but looks really impressive and runs fast.
30 enemy characters shooting at the player at once
- NO: Each enemy has his own skinned mesh, a separate object for his weapon, and instantiates a rigidbody-based projectile every time he fires. Each enemy takes the state of all of his compatriots into account in a complicated AI script that runs every frame.
- YES: Most of the enemies are far away, and are represented by single sprites, or, the enemies are 2D and are just a couple sprites anyway. Every enemy bullet is drawn by the same particle system and simulated by a script which does only rudimentary physics. Each enemy updates his AI state twice per second according to the state of the other enemies in his sector.
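The "updates his AI state twice per second" idea in the example above can be sketched with InvokeRepeating. This is an assumed illustration (the class and method names are invented, not taken from any Unity sample):

```csharp
using UnityEngine;

// Hedged sketch: heavy decision-making runs on a coarse timer,
// while per-frame code only acts on the last decision.
public class EnemyAI : MonoBehaviour
{
    void Start()
    {
        // Twice per second; the random start delay staggers enemies
        // so they don't all think on the same frame.
        InvokeRepeating("UpdateAIState", Random.value * 0.5f, 0.5f);
    }

    void UpdateAIState()
    {
        // Expensive work goes here: query other enemies in the sector,
        // pick a target, choose a behavior state, etc.
    }

    void Update()
    {
        // Cheap per-frame work only: steer toward the cached target.
    }
}
```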
The how and why of script optimization
See our page on Optimizing Scripts.
Page last updated: 2012-11-07
iphone-PracticalRenderingOptimizations
This section introduces the technicalities of rendering optimization. It shows how to bake lighting results for better performance, and how the developers of Shadowgun leveraged high-contrast textures, with lighting baked in, to make their game look great. If you are looking for general information on what a mobile-optimized game looks like, check out the Graphics Methods page.
Get Artsy!
Sometimes optimizing the rendering in your game requires some dirty work. All of the structure that Unity provides makes it easy to get something working fast, but if you require top notch fidelity on limited hardware, doing things yourself and sidestepping that structure is the way to go, provided that you can introduce a key structural change that makes things a lot faster. Your tools of choice are editor scripts, simple shaders, and good old-fashioned art production.
Note for Unity Indie users: The editor scripts referenced here use RenderTextures to make production smooth, so they won't work for you right away, but the principles behind them work with screenshotting as well, so nothing is stopping you from using these techniques for a few texture bakes of your own.
How to Dive Under the Hood
First of all, check out this introduction to how shaders are written.
- Built-in shaders
- Examine the source code of the built-in shaders. Often, if you want to make a new shader that does something different, you can achieve it by taking parts of two already-existing shaders and putting them together.
- Surface Shader Debugging (#pragma debug)
- A CG Shader is generated from every surface shader, and then fully compiled from there. If you add #pragma debug to the top of your surface shader, when you open the compiled shader via the inspector, you can see the intermediate CG code. This is useful for inspecting how a specific part of a shader is actually calculated, and it can also be useful for grabbing certain aspects you want from a surface shader and applying them to a CG shader.
- Shader Include Files
- A lot of shader helper code is included in every shader, and usually it isn't used, but this is why you will sometimes see shaders calling functions like WorldReflectionVector which don't seem to be defined anywhere. Unity has several built-in shader include files that contain these helper definitions. To find a specific function, you will need to search through all of the different includes.
- These files are a major part of internal structure that Unity uses to make it easy to write shaders; the files provide things like real time shadows, different light types, lightmaps, and multiple platform support.
- Hardware documentation
- Take your time to study Apple's documentation on hardware and best practices for writing shaders. Note, however, that we would suggest being more aggressive with floating point precision hints.
Shadowgun in-depth
Shadowgun is a great graphical achievement considering the hardware it runs on. While the art quality seems to be the key to the puzzle, there are a couple tricks to achieving such quality that programmers can pull off to maximize their artists' potential.
In the Graphics Methods page we used the golden statue in Shadowgun as an example of a great optimization; instead of using a normal map to give their statue some solid definition, they just baked lighting detail into the texture. Here, we will show you how and why you should use a similar technique in your own game.
+ Show [Shader code for Real-Time vs Baked Golden Statue] +
Reflective Bumped Specular
Baked Light with Reflection
Render to Texel
The real-time light is definitely higher quality, but the performance gain from the baked version is massive. So how was this done? An editor tool called Render to Texel was created for this purpose. It bakes the light into the texture through the following process:
- Transform the tangent space normal map to world space via script.
- Create a world space position map via script.
- Render to Texture a fullscreen pass of the entire texture using the two previous maps, with one additional pass per light.
- Average results from several different vantage points to yield something which looks plausible all around, or at least from common viewing angles in your game.
This is how the best graphics optimizations work. They sidestep tons of calculations by performing them in the editor or before the game runs. In general, this is what you want to do:
- Create something that looks great, don't worry about performance.
- Use tools like Unity's lightmapper and editor extensions like Render to Texel and Sprite Packer to bake it down to something which is very simple to render.
- Making your own tools is the best way to do this; you can create the perfect tool suited to whatever problem your game presents.
- Create shaders and scripts which modulate your baked output to give it some sort of "shine"; an eye-catching effect to create an illusion of dynamic light.
Concept of Light Frequency

Just like the bass and treble of an audio track, images also have high-frequency and low-frequency components, and when you're rendering, it's best to handle them in different ways, similar to how stereos use subwoofers and tweeters to produce a full body of sound. One way to visualize the different frequencies of an image is to use the "High Pass" filter in Photoshop. If you have done audio work before, you will recognize the name High Pass. Essentially, it cuts off all frequencies lower than X, the parameter you pass to the filter. For images, Gaussian Blur is the equivalent of a Low Pass.
This applies to realtime graphics because frequency is a good way to separate things out and decide how to handle what. For example, in a basic lightmapped environment, the final image is obtained by compositing the lightmap, which is low-frequency, with the textures, which are high-frequency. In Shadowgun, low-frequency light is applied to characters quickly with light probes, while high-frequency light is faked through the use of a simple bumpmapped shader with an arbitrary light direction.
In general, by using different methods to render different frequencies of light (for example, baked vs. dynamic, per-object vs. per-level, per-pixel vs. per-vertex), you can create full-bodied images on limited hardware. Stylistic choices aside, it's generally a good idea to have strong variation in colors or values at both high and low frequencies.
Frequency in Practice: Shadowgun Decomposition

- Top Row
- Ultra-Low-Frequency Specular Vertex Light (Dynamic) | High Frequency Alpha Channel | Low Frequency Lightmap | High Frequency Albedo
- Mid Row
- Specular Vertex Light * Alpha | High Frequency Additive Details | Lightmap * Color Channel
- Bottom
- Final Sum
Note: Usually these decompositions refer to steps in a deferred renderer, but that's not the case here. All of this is done in just one pass. These are the two relevant shaders which this composition was based on:
+ Show [Lightmapped with Virtual Gloss Per-Vertex Additive] + + Show [Lightprobes with Virtual Gloss Per-Vertex Additive] +
Best Practices
GPU optimization: Alpha-Testing
Some GPUs, particularly ones found in mobile devices, incur a high performance overhead for alpha-testing (or use of the discard and clip operations in pixel shaders). You should replace alpha-test shaders with alpha-blended ones if possible. Where alpha-testing cannot be avoided, you should keep the overall number of visible alpha-tested pixels to a minimum.
iOS Texture Compression
Some images, especially if using iOS/Android PVR texture compression, are prone to visual artifacts in the alpha channel. In such cases, you might need to tweak the PVRT compression parameters directly in your imaging software. You can do that by installing the PVR export plugin or using PVRTexTool from Imagination Tech, the creators of the PVRTC format. The resulting compressed image file with a .pvr extension will be imported by the Unity editor directly and the specified compression parameters will be preserved. If PVRT-compressed textures do not give good enough visual quality or you need especially crisp imaging (as you might for GUI textures) then you should consider using 16-bit textures instead of 32-bit. By doing so, you will reduce the memory bandwidth and storage requirements by half.
Android Texture Compression
All Android devices that support OpenGL ES 2.0 also support the ETC1 compression format; you are therefore encouraged to use ETC1 as the preferred texture format whenever possible.
If you are targeting a specific graphics architecture, such as NVIDIA Tegra or Qualcomm Snapdragon, it may be worth considering the proprietary compression formats available on those architectures. The Android Market also allows filtering based on supported texture compression formats, meaning a distribution archive (.apk) with, for example, DXT-compressed textures can be prevented from being downloaded to a device which doesn't support it.
An Exercise
- Download Render to Texel.
- Bake lighting on your model.
- Run the High Pass filter on the result in Photoshop.
- Edit the "Mobile/Cubemapped" shader, included in the Render to Texel package, so that the missing low-frequency light details are replaced by vertex light.
iphone-PracticalScriptingOptimizations
This section demonstrates how you would go about optimizing the actual scripts and methods your game uses, and it also goes into detail about the reasons why the optimizations work, and why applying them will benefit you in certain situations.
Profiler is King (Unity Pro)
There is no such thing as a list of boxes to check that will ensure your project runs smoothly. To optimize a slow project, you have to profile to find specific offenders that take up a disproportionate amount of time. Trying to optimize without profiling or without thoroughly understanding the results that the profiler gives is like trying to optimize with a blindfold on.
So, if you want to make a technologically demanding game that runs on mobile platforms, you probably need Unity Pro for the Profiler.
What About Indie?
You can use the internal profiler to figure out what kind of process is slowing your game down, be it physics, scripts, or rendering, but you can't drill down into specific scripts and methods to find the actual offenders. However, by building switches into your game which enable and disable certain functionality, you can narrow down the worst offenders significantly. For example, if you remove the enemy characters' AI script and the framerate doubles, you know that the script, or something that it brings into the game, has to be optimized. The only problem is that you may have to try a lot of different things before you find the problem.
For more about profiling on mobile devices, see the profiling section.
Optimized by Design
Attempting to develop something which is fast from the beginning is risky, because there is a trade-off between wasting time making things that would be just as fast if they weren't optimized and making things which will have to be cut or replaced later because they are too slow. It takes intuition and knowledge of the hardware to make good decisions in this regard, especially because every game is different and what might be a crucial optimization for one game may be a flop in another.
Object Pooling
We gave object pooling as an example of the intersection between good gameplay and good code design in our introduction to optimized scripting methods. Using object pooling for ephemeral objects is faster than creating and destroying them, because it makes memory allocation simpler and removes dynamic memory allocation overhead and Garbage Collection, or GC.
Memory Allocation
+ Show [Simple Explanation of what Automatic Memory Management is] +
- Read more about Automatic Memory Management and the Garbage Collector.
How to Avoid Allocating Memory
Every time an object is created, memory is allocated. Very often in code, you are creating objects without even knowing it.
- Debug.Log("boo" + "hoo"); creates an object.
- Use System.String.Empty instead of "" when dealing with lots of strings.
- Immediate Mode GUI (UnityGUI) is slow and should be avoided whenever performance is an issue.
- Difference between class and struct:
- Objects which stick around for a long time should be classes, and objects which are ephemeral should be structs. Vector3 is probably the most famous struct. If it were a class, everything would be a lot slower.
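A small, hypothetical sketch of the points above; the class and member names are invented for illustration, and exact allocation behavior varies with the Mono runtime version:

```csharp
using System.Text;
using UnityEngine;

// Illustrative only: shows the struct-vs-class and string points above.
public class AllocationExamples : MonoBehaviour
{
    // Ephemeral data as a struct: copied by value, creates no garbage.
    struct DamageEvent
    {
        public int amount;
        public Vector3 position;
    }

    void ApplyDamage(DamageEvent e)
    {
        // The struct arrives as a copy; nothing was allocated on the heap.
    }

    // A reused StringBuilder avoids building a new string per concatenation.
    private StringBuilder scoreText = new StringBuilder(32);

    void ShowScore(int score)
    {
        scoreText.Length = 0;            // reset without allocating
        scoreText.Append("Score: ");
        scoreText.Append(score);
        Debug.Log(scoreText.ToString()); // one allocation here, not per append
    }

    string EmptyLabel()
    {
        // Reuses the shared empty-string instance rather than writing ""
        return System.String.Empty;
    }
}
```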
Why Object Pooling is Faster
The upshot of this is that using Instantiate and Destroy a lot gives the Garbage Collector a lot to do, and this can cause a "hitch" in gameplay. As the Automatic Memory Management page explains, there are other ways to get around the common performance hitches that surround Instantiate and Destroy, such as triggering the Garbage Collector manually when nothing is going on, or triggering it very often so that a large backlog of unused memory never builds up.
Another reason is that, when a specific prefab is instantiated for the first time, sometimes additional things have to be loaded into RAM, or textures and meshes need to be uploaded to the GPU. This can cause a hitch as well, and with object pooling, this happens when the level loads instead of during gameplay.
Imagine a puppeteer who has an infinite box of puppets, where every time the script calls for a character to appear, he gets a new copy of its puppet out of the box, and every time the character exits the stage, he tosses the current copy. Object pooling is the equivalent of getting all the puppets out of the box before the show starts, and leaving them on the table behind the stage whenever they are not supposed to be visible.
Why Object Pooling can be Slower
One issue is that the creation of a pool reduces the amount of heap memory available for other purposes; so if you keep allocating memory on top of the pools you just created, you might trigger garbage collection even more often. Not only that, every collection will be slower, because the time taken for a collection increases with the number of live objects. With these issues in mind, it should be apparent that performance will suffer if you allocate pools that are too large or keep them active when the objects they contain will not be needed for some time. Furthermore, many types of objects don't lend themselves well to object pooling. For example, the game may include spell effects that persist for a considerable time or enemies that appear in large numbers but which are only killed gradually as the game progresses. In such cases, the performance overhead of an object pool greatly outweighs the benefits and so it should not be used.
Implementation
Here's a simple side by side comparison of a script for a simple projectile, one using Instantiation, and one using Object Pooling.
+ Show [Object Pooling Example] +
Of course, for a large, complicated game, you will want to make a generic solution that works for all your prefabs.
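The collapsed example above is not reproduced here, but the core of a projectile pool can be sketched as follows. This is a minimal, assumed illustration (the class and method names are invented; `SetActive` assumes the Unity 4 API):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal pooling sketch: projectiles are pre-instantiated once and
// recycled by toggling their active state, so Instantiate/Destroy
// never run during gameplay.
public class ProjectilePool : MonoBehaviour
{
    public GameObject projectilePrefab;
    public int poolSize = 20;
    private List<GameObject> pool = new List<GameObject>();

    void Start()
    {
        for (int i = 0; i < poolSize; i++)
        {
            GameObject p = (GameObject)Instantiate(projectilePrefab);
            p.SetActive(false);   // parked until needed
            pool.Add(p);
        }
    }

    public GameObject Spawn(Vector3 position, Quaternion rotation)
    {
        for (int i = 0; i < pool.Count; i++)
        {
            if (!pool[i].activeInHierarchy)
            {
                pool[i].transform.position = position;
                pool[i].transform.rotation = rotation;
                pool[i].SetActive(true);
                return pool[i];
            }
        }
        return null; // pool exhausted: the fire rate is capped by design
    }

    // A projectile "dies" by deactivating itself instead of Destroy().
    public void Despawn(GameObject projectile)
    {
        projectile.SetActive(false);
    }
}
```

Note how exhausting the pool simply caps the number of live projectiles, just like the Space Invaders example earlier.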
Another Example: Coin Party!
The example of "Hundreds of rotating, dynamically lit, collectable coins onscreen at once" which was given in the Scripting Methods section will be used to demonstrate how script code, Unity components like the Particle System, and custom shaders can be used to create a stunning effect without taxing the weak mobile hardware.
Imagine that this effect lives in the context of a 2D sidescrolling game with tons of coins that fall, bounce, and rotate. The coins are dynamically lit by point lights. We want to capture the light glinting off the coins to make our game more impressive.
If we had powerful hardware, we could use a standard approach to this problem. Make every coin an object, shade the object with either vertex-lit, forward, or deferred lighting, and then add glow on top as an image effect to get the brightly reflecting coins to bleed light onto the surrounding area.
But mobile hardware would choke on that many objects, and a glow effect is totally out of the question. So what do we do?
Animated Sprite Particle System

If you want to display a lot of objects which all move in a similar way and can never be carefully inspected by the player, you might be able to render large numbers of them very cheaply using a particle system. Here are a few stereotypical applications of this technique:
- Collectables or Coins
- Flying Debris
- Hordes or Flocks of Simple Enemies
- Cheering Crowds
- Hundreds of Projectiles or Explosions
There is a free editor extension called Sprite Packer that facilitates the creation of animated sprite particle systems. It renders frames of your object to a texture, which can then be used as an animated sprite sheet on a particle system. For our use case, we would use it on our rotating coin.
Reference Implementation

Included in the Sprite Packer project is an example that demonstrates a solution to this exact problem.
It uses a family of assets of all different kinds to achieve a dazzling effect on a low computing budget:
- A control script
- Specialized textures created from the output of the SpritePacker
- A specialized shader which is intimately connected with both the control script and the texture.
A readme file is included with the example which attempts to explain why and how the system works, outlining the process that was used to determine what features were needed and how they were implemented. This is that file:
+ Show [Coin Party README] +
The end goal of this example, or "moral of the story", is that if there is something your game really needs, and it causes lag when you try to achieve it through conventional means, that doesn't mean it is impossible; it just means you have to put in some work on a system of your own that runs much faster.
Techniques for Managing Thousands of Objects
These are specific scripting optimizations which are applicable in situations where hundreds or thousands of dynamic objects are involved. Applying these techniques to every script in your game is a terrible idea; they should be reserved as tools and design guidelines for large scripts which handle tons of objects or data at run time.
- Avoid or minimize O(n²) operations on large data sets
- Cache references instead of performing unnecessary searches
- Minimize expensive math functions
- Only execute expensive operations occasionally, e.g. Physics.Raycast()
- Minimize callstack overhead in inner loops
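As a hedged illustration of the "expensive operations occasionally" point above, here is a grounded check whose Physics.Raycast runs only every few frames, with the result cached in between. The interval, ray length, and staggering scheme are assumptions:

```csharp
using UnityEngine;

// Sketch: the raycast runs every N frames and its result is cached;
// per-frame logic reads the cached flag.
public class GroundSensor : MonoBehaviour
{
    public int framesBetweenChecks = 3;
    private bool grounded;
    private int frameOffset;

    void Start()
    {
        // Random offset so many sensors don't all cast on the same frame.
        frameOffset = Random.Range(0, framesBetweenChecks);
    }

    void Update()
    {
        if ((Time.frameCount + frameOffset) % framesBetweenChecks == 0)
        {
            // Expensive call, now amortized over several frames.
            grounded = Physics.Raycast(transform.position, Vector3.down, 1.1f);
        }
        // Movement code here reads the cached 'grounded' value every frame.
    }
}
```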
Optimizing Physics Performance
The NVIDIA PhysX physics engine used by Unity is available on mobiles, but the performance limits of the hardware will be reached more easily on mobile platforms than on desktops.
Here are some tips for tuning physics to get better performance on mobiles:
- You can adjust the Fixed Timestep setting (in the Time manager) to reduce the time spent on physics updates. Increasing the timestep will reduce the CPU overhead at the expense of the accuracy of the physics. Often, lower accuracy is an acceptable tradeoff for increased speed.
- Set the Maximum Allowed Timestep in the Time manager in the 8-10fps range to cap the time spent on physics in the worst case scenario.
- Mesh colliders have a much higher performance overhead than primitive colliders, so use them sparingly. It is often possible to approximate the shape of a mesh by using child objects with primitive colliders. The child colliders will be controlled collectively as a single compound collider by the rigidbody on the parent.
- While wheel colliders are not strictly colliders in the sense of solid objects, they nonetheless have a high CPU overhead.
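The first two tips can also be applied from code rather than the Time manager. The values below are examples only, not recommendations for every game: a 0.04 s fixed step means 25 physics updates per second, and a 0.125 s maximum step caps physics at an 8 fps worst case, as suggested above.

```csharp
using UnityEngine;

// Example sketch: set the physics timestep settings at startup.
public class MobilePhysicsSettings : MonoBehaviour
{
    void Awake()
    {
        Time.fixedDeltaTime = 0.04f;    // Fixed Timestep: fewer, coarser physics updates
        Time.maximumDeltaTime = 0.125f; // Maximum Allowed Timestep: ~8 fps worst case
    }
}
```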
Optimizing Graphics Performance
Good performance is critical to the success of many games. Below are some simple guidelines for maximizing the speed of your game's graphical rendering.
Where are the graphics costs?
The graphical parts of your game primarily tax two systems of the computer: the GPU and the CPU. The first rule of any optimization is to find where the performance problem is, because strategies for optimizing for the GPU vs. the CPU are quite different (and can even be opposite - it's quite common to make the GPU do more work while optimizing for the CPU, and vice versa).
Typical bottlenecks and ways to check for them:
- GPU is often limited by fillrate or memory bandwidth.
- Does running the game at lower display resolution make it faster? If so, you're most likely limited by fillrate on the GPU.
- CPU is often limited by the number of things that need to be rendered, also known as "draw calls".
- Check "draw calls" in Rendering Statistics window; if it's more than several thousand (for PCs) or several hundred (for mobile), then you might want to optimize the object count.
Of course, these are only the rules of thumb; the bottleneck could as well be somewhere else. Less typical bottlenecks:
- Rendering is not a problem, neither on the GPU nor the CPU! For example, your scripts or physics might be the actual problem. Use Profiler to figure this out.
- GPU has too many vertices to process. How many vertices are "ok" depends on the GPU and the complexity of vertex shaders. Typical figures are "not more than 100 thousand" on mobile, and "not more than several million" on PC.
- CPU has too many vertices to process, for things that do vertex processing on the CPU. This could be skinned meshes, cloth simulation, particles etc.
CPU optimization - draw call count
In order to render any object on the screen, the CPU has some work to do - things like figuring out which lights affect that object, setting up the shader & shader parameters, sending drawing commands to the graphics driver, which then prepares the commands to be sent off to the graphics card. All this "per object" CPU cost is not very cheap, so if you have lots of visible objects, it can add up.
So, for example, if you have a thousand triangles, it will be much, much cheaper if they are all in one mesh rather than in a thousand individual meshes with one triangle each. The cost of both scenarios on the GPU will be very similar, but the work done by the CPU to render a thousand objects (instead of one) will be significant.
To make the CPU do less work, it's good to reduce the visible object count:
- Combine close objects together, either manually or using Unity's draw call batching.
- Use fewer materials in your objects, by putting separate textures into a larger texture atlas and so on.
- Use fewer features that cause objects to be rendered multiple times (reflections, shadows, per-pixel lights etc.; see below).
Combine objects together so that each mesh has at least several hundred triangles and uses only one Material for the entire mesh. It is important to understand that combining two objects which don't share a material does not give you any performance increase at all. The most common reason for having multiple materials is that two meshes don't share the same textures, so to optimize CPU performance, you should ensure that any objects you combine share the same textures.
However, when using many pixel lights in the Forward rendering path, there are situations where combining objects may not make sense, as explained below.
GPU: Optimizing Model Geometry
When optimizing the geometry of a model, there are two basic rules:
- Don't use any more triangles than necessary
- Try to keep the number of UV mapping seams and hard edges (doubled-up vertices) as low as possible
Note that the actual number of vertices that graphics hardware has to process is usually not the same as the number reported by a 3D application. Modeling applications usually display the geometric vertex count, i.e. the number of distinct corner points that make up a model. For a graphics card, however, some geometric vertices will need to be split into two or more logical vertices for rendering purposes. A vertex must be split if it has multiple normals, UV coordinates or vertex colors. Consequently, the vertex count in Unity is invariably higher than the count given by the 3D application.
While the amount of geometry in the models is mostly relevant for the GPU, some features in Unity also process models on the CPU, for example mesh skinning.
Lighting Performance
Lighting which is not computed at all is always the fastest! Use Lightmapping to "bake" static lighting just once, instead of computing it each frame. The process of generating a lightmapped environment takes only a little longer than just placing a light in the scene in Unity, but:
- It is going to run a lot faster (2-3 times for 2 per-pixel lights)
- And it will look a lot better since you can bake global illumination and the lightmapper can smooth the results
In many cases, simple tricks in shaders and content can be used instead of adding more lights all over the place. For example, instead of adding a light that shines straight into the camera to get a "rim lighting" effect, consider adding a dedicated rim lighting computation directly into your shaders.
Lights in forward rendering
Per-pixel dynamic lighting will add significant rendering overhead to every affected pixel and can lead to objects being rendered in multiple passes. On less powerful devices, like mobile or low-end PC GPUs, avoid having more than one Pixel Light illuminating any single object, and use lightmaps to light static objects instead of having their lighting calculated every frame. Per-vertex dynamic lighting can add significant cost to vertex transformations. Try to avoid situations where multiple lights illuminate any given object.
If you use pixel lighting then each mesh has to be rendered as many times as there are pixel lights illuminating it. If you combine two meshes that are very far apart, it will increase the effective size of the combined object. All pixel lights that illuminate any part of this combined object will be taken into account during rendering, so the number of rendering passes that need to be made could be increased. Generally, the number of passes that must be made to render the combined object is the sum of the number of passes for each of the separate objects, and so nothing is gained by combining. For this reason, you should not combine meshes that are far enough apart to be affected by different sets of pixel lights.
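The pass arithmetic above can be sketched as a toy model. This is purely illustrative, not Unity's actual renderer; the function names and inputs are hypothetical:

```javascript
// Rough model of forward-rendering pass counts (illustrative only).
// Each mesh is rendered once per pixel light that touches it.
function passesIfSeparate(lightsPerObject) {
  // separate meshes: each pays only for its own lights
  return lightsPerObject.reduce(function (sum, n) { return sum + n; }, 0);
}

function passesIfCombined(lightsPerObject, totalDistinctLights) {
  // a combined mesh is touched by every light that touches any part of it,
  // so it is rendered once per distinct light
  return totalDistinctLights;
}

// Two distant objects, each lit by 2 different pixel lights (4 distinct lights):
var separate = passesIfSeparate([2, 2]);     // 4 passes
var combined = passesIfCombined([2, 2], 4);  // 4 passes - nothing gained by combining
```

With overlapping light sets the combined mesh can even cost more, since every light now affects the whole combined bounds.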
During rendering, Unity finds all lights surrounding a mesh and calculates which of those lights affect it most. The Quality Settings are used to modify how many of the lights end up as pixel lights and how many as vertex lights. Each light calculates its importance based on how far away it is from the mesh and how intense its illumination is. Furthermore, some lights are more important than others purely from the game context. For this reason, every light has a Render Mode setting which can be set to Important or Not Important; lights marked as Not Important will typically have a lower rendering overhead.
As an example, consider a driving game where the player's car is driving in the dark with headlights switched on. The headlights are likely to be the most visually significant light sources in the game, so their Render Mode would probably be set to Important. On the other hand, there may be other lights in the game that are less important (other cars' rear lights, say) and which don't improve the visual effect much by being pixel lights. The Render Mode for such lights can safely be set to Not Important so as to avoid wasting rendering capacity in places where it will give little benefit.
Optimizing per-pixel lighting saves both the CPU and the GPU: the CPU has fewer draw calls to issue, and the GPU has fewer vertices to process and fewer pixels to rasterize for all these additional object renders.
GPU: Texture Compression and Mipmaps
Using Compressed Textures will decrease the size of your textures (resulting in faster load times and smaller memory footprint) and can also dramatically increase rendering performance. Compressed textures use only a fraction of the memory bandwidth needed for uncompressed 32bit RGBA textures.
Use Texture Mip Maps
As a rule of thumb, always have Generate Mip Maps enabled for textures used in a 3D scene. In the same way that texture compression can limit the amount of texture data transferred while the GPU is rendering, a mip-mapped texture enables the GPU to use a lower-resolution version of the texture for smaller triangles.
The only exception to this rule is when a texel (texture pixel) is known to map 1:1 to the rendered screen pixel, as with UI elements or in a 2D game.
LOD and Per-Layer Cull Distances
In some games, it may be appropriate to cull small objects more aggressively than large ones, in order to reduce both the CPU and GPU load. For example, small rocks and debris could be made invisible at long distances while large buildings would still be visible.
This can be achieved either with the Level Of Detail system or by setting manual per-layer culling distances on the camera. You could put small objects into a separate layer and set up per-layer cull distances using the Camera.layerCullDistances script function.
Realtime Shadows
Realtime shadows are nice, but they can cost quite a lot of performance, both in terms of extra draw calls for the CPU, and extra processing on the GPU. For further details, see the Shadows page.
GPU: Tips for writing high-performance shaders
A high-end PC GPU and a low-end mobile GPU can literally be hundreds of times apart in performance. The same is true even within a single platform: on a PC, a fast GPU is dozens of times faster than a slow integrated GPU, and on mobile platforms you can see just as large a difference between GPUs.
So keep in mind that GPU performance on mobile platforms and low-end PCs will be much lower than on your development machines. Typically, shaders will need to be hand optimized to reduce calculations and texture reads in order to get good performance. For example, some built-in Unity shaders have their "mobile" equivalents that are much faster (but have some limitations or approximations - that's what makes them faster).
Below are some guidelines that are most important for mobile and low-end PC graphics cards:
Complex mathematical operations
Transcendental mathematical functions (such as pow, exp, log, cos, sin, tan, etc) are quite expensive, so a good rule of thumb is to have no more than one such operation per pixel. Consider using lookup textures as an alternative where applicable.
It is not advisable to attempt to write your own normalize, dot, inversesqrt operations, however. If you use the built-in ones then the driver will generate much better code for you.
Keep in mind that alpha test (discard) operation will make your fragments slower.
Floating point operations
You should always specify the precision of floating point variables when writing custom shaders. It is critical to pick the smallest possible floating point format in order to get the best performance. Precision of operations is completely ignored on many desktop GPUs, but is critical for performance on many mobile GPUs.
If the shader is written in Cg/HLSL then precision is specified as follows:
- float - full 32-bit floating point format, suitable for vertex transformations but has the slowest performance.
- half - reduced 16-bit floating point format, suitable for texture UV coordinates and roughly twice as fast as float.
- fixed - 10-bit fixed point format, suitable for colors, lighting calculations and other high-performance operations, and roughly four times faster than float.
If the shader is written in GLSL ES then the floating point precisions are specified as highp, mediump and lowp respectively.
For further details about shader performance, please read the Shader Performance page.
Simple Checklist to make Your Game Faster
- Keep the vertex count below 200K..3M per frame when targeting PCs, depending on the target GPU
- If you're using built-in shaders, pick ones from Mobile or Unlit category. They work on non-mobile platforms as well; but are simplified and approximated versions of the more complex shaders.
- Keep the number of different materials per scene low - share as many materials between different objects as possible.
- Set the Static property on non-moving objects to allow internal optimizations like static batching.
- Do not use Pixel Lights when it is not necessary - choose to have only a single (preferably directional) pixel light affecting your geometry.
- Do not use dynamic lights when it is not necessary - choose to bake lighting instead.
- Use compressed texture formats when possible, otherwise prefer 16bit textures over 32bit.
- Do not use fog when it is not necessary.
- Learn the benefits of Occlusion Culling and use it to reduce the amount of visible geometry and draw calls in complex static scenes with lots of occlusion. Plan your levels to benefit from occlusion culling.
- Use skyboxes to "fake" distant geometry.
- Use pixel shaders or texture combiners to mix several textures instead of a multi-pass approach.
- If writing custom shaders, always use smallest possible floating point format:
- fixed / lowp - for colors, lighting information and normals,
- half / mediump - for texture UV coordinates,
- float / highp - avoid in pixel shaders, fine to use in vertex shader for position calculations.
- Minimize use of complex mathematical operations such as pow, sin, cos etc. in pixel shaders.
- Use fewer textures per fragment.
See Also
Page last updated: 2012-07-30
Draw Call Batching
To draw an object on the screen, the engine has to issue a draw call to the graphics API (e.g. OpenGL or Direct3D). Every single draw call requires a significant amount of work on the part of the graphics API, causing significant performance overhead on the CPU side.
Unity combines a number of objects at runtime and draws them together with a single draw call. This operation is called "batching". The more objects Unity can batch together, the better rendering performance you will get.
Built-in batching support in Unity has a significant benefit over simply combining geometry in the modeling tool (or using the CombineChildren script from the Standard Assets package). Batching in Unity happens after the visibility determination step. The engine does culling on each object individually, so the amount of rendered geometry will be the same as without batching. Combining geometry in the modeling tool, on the other hand, prevents efficient culling and results in a much higher amount of geometry being rendered.
Materials
Only objects sharing the same material can be batched together. Therefore, if you want to achieve good batching, you need to share as many materials among different objects as possible.
If you have two identical materials which differ only in their textures, you can combine those textures into a single big texture - a process often called texture atlasing. Once the textures are in the same atlas, you can use a single material instead.
If you need to access shared material properties from scripts, then it is important to note that modifying Renderer.material will create a copy of the material. Instead, you should use Renderer.sharedMaterial to keep the material shared.
Dynamic Batching
Unity can automatically batch moving objects into the same draw call if they share the same material.
Dynamic batching is done automatically and does not require any additional effort on your side.
- Batching dynamic objects has certain overhead per vertex, so batching is applied only to meshes containing less than 900 vertex attributes in total.
- If your shader uses Vertex Position, Normal and a single UV, then you can batch up to 300 verts; if your shader uses Vertex Position, Normal, UV0, UV1 and Tangent, then only 180 verts.
- Please note: the attribute count limit might be changed in the future.
- Don't use scale. Objects with scale (1,1,1) and (2,2,2) won't batch.
- Uniformly scaled objects won't be batched with non-uniformly scaled ones.
- Objects with scale (1,1,1) and (1,2,1) won't be batched. On the other hand (1,2,1) and (1,3,1) will be.
- Using different material instances will cause batching to fail.
- Objects with lightmaps have additional (hidden) material parameter: offset/scale in lightmap, so lightmapped objects won't be batched (unless they point to same portions of lightmap)
- Multi-pass shaders will break batching. E.g. almost all Unity shaders support several lights in forward rendering, effectively doing an additional pass for each of them.
- Instances of a prefab automatically use the same mesh and material.
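The vertex limits quoted above follow directly from the 900-attribute budget. A quick sketch of the arithmetic (the function is hypothetical, not a Unity API):

```javascript
// Sketch of the dynamic batching vertex budget described above.
// Limit: fewer than 900 vertex attributes in total per mesh.
var ATTRIBUTE_LIMIT = 900;

function maxBatchableVerts(attributesPerVertex) {
  return Math.floor(ATTRIBUTE_LIMIT / attributesPerVertex);
}

// Position + Normal + one UV = 3 attributes per vertex:
maxBatchableVerts(3);  // 300 verts
// Position + Normal + UV0 + UV1 + Tangent = 5 attributes per vertex:
maxBatchableVerts(5);  // 180 verts
```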
Static Batching
Static batching, on the other hand, allows the engine to reduce draw calls for geometry of any size (provided it does not move and shares the same material). Static batching is significantly more efficient than dynamic batching. You should choose static batching as it will require less CPU power.
In order to take advantage of static batching, you need to explicitly specify that certain objects are static and will not move, rotate or scale in the game. To do so, mark objects as static using the Static checkbox in the Inspector:

Using static batching will require additional memory for storing the combined geometry. If several objects shared the same geometry before static batching, then a copy of geometry will be created for each object, either in the Editor or at runtime. This might not always be a good idea - sometimes you will have to sacrifice rendering performance by avoiding static batching for some objects to keep a smaller memory footprint. For example, marking trees as static in a dense forest level can have serious memory impact.
Static batching is only available in Unity Pro for each platform.
Page last updated: 2012-10-22
Modeling Optimized Characters
Below are some tips for designing character models to give optimal rendering speed.
Use a Single Skinned Mesh Renderer
You should use only a single skinned mesh renderer for each character. Unity optimizes animation using visibility culling and bounding volume updates and these optimizations are only activated if you use one animation component and one skinned mesh renderer in conjunction. The rendering time for a model could roughly double as a result of using two skinned meshes in place of a single mesh and there is seldom any practical advantage in using multiple meshes.
Use as Few Materials as Possible
You should also keep the number of materials on each mesh as low as possible. The only reason why you might want to have more than one material on a character is that you need to use different shaders for different parts (eg, a special shader for the eyes). However, two or three materials per character should be sufficient in almost all cases.
Use as Few Bones as Possible
A bone hierarchy in a typical desktop game uses somewhere between fifteen and sixty bones. The fewer bones you use, the better the performance will be. You can achieve very good quality on desktop platforms and fairly good quality on mobile platforms with about thirty bones. Ideally, keep the number below thirty for mobile devices and don't go too far above thirty for desktop games.
Polygon Count
The number of polygons you should use depends on the quality you require and the platform you are targeting. For mobile devices, somewhere between 300 and 1500 polygons per mesh will give good results, whereas for desktop platforms the ideal range is about 1500 to 4000. You may need to reduce the polygon count per mesh if the game can have lots of characters onscreen at any given time. As an example, Half Life 2 used 2500-5000 triangles per character. Current AAA games running on the PS3 or Xbox 360 usually have characters with 5000-7000 triangles.
Keep Forward and Inverse Kinematics Separate
When animations are imported, a model's inverse kinematic (IK) nodes are baked into forward kinematics (FK) and as a result, Unity doesn't need the IK nodes at all. However, if they are left in the model then they will have a CPU overhead even though they don't affect the animation. You can delete the redundant IK nodes in Unity or in the modeling tool, according to your preference. Ideally, you should keep separate IK and FK hierarchies during modeling to make it easier to remove the IK nodes when necessary.
Page last updated: 2011-11-04
RenderingStatistics
Subshaders use tags to tell the rendering engine how and when they expect to be rendered.
Syntax
- Tags { "TagName1" = "Value1" "TagName2" = "Value2" }
Specifies TagName1 to have Value1 and TagName2 to have Value2. You can have as many tags as you like.
Details
Tags are basically key-value pairs. Inside the SubShader, tags are used to determine the rendering order and other parameters of a subshader. Note that the following tags recognized by Unity must be inside the SubShader section and not inside a Pass!
Rendering Order - Queue tag
You can determine in which order your objects are drawn using the Queue tag. A shader decides which render queue its objects belong to; this way, transparent shaders make sure their objects are drawn after all opaque objects, and so on.
There are four pre-defined render queues, but there can be more queues in between the pre-defined ones. The pre-defined queues are:
- Background - this render queue is rendered before any others. It is used for skyboxes and the like.
- Geometry (default) - this is used for most objects. Opaque geometry uses this queue.
- AlphaTest - alpha tested geometry uses this queue. It's a separate queue from Geometry because it's more efficient to render alpha-tested objects after all solid objects are drawn.
- Transparent - this render queue is rendered after Geometry and AlphaTest, in back-to-front order. Anything alpha-blended (i.e. shaders that don't write to the depth buffer) should go here (glass, particle effects).
- Overlay - this render queue is meant for overlay effects. Anything rendered last should go here (e.g. lens flares).
Shader "Transparent Queue Example" {
SubShader {
Tags { "Queue" = "Transparent" }
Pass {
// rest of the shader body...
}
}
}
An example illustrating how to render something in the transparent queue.
The Geometry render queue optimizes the drawing order of the objects for best performance. All other render queues sort objects by distance, starting rendering from the furthest ones and ending with the closest ones.
For special uses, in-between queues can be used. Internally, each queue is represented by an integer index: Background is 1000, Geometry is 2000, AlphaTest is 2450, Transparent is 3000 and Overlay is 4000.
Tags { "Queue" = "Geometry+1" }
This will make the object be rendered with a render queue index of 2001 (Geometry plus one), i.e. after all opaque objects but before transparent objects. This is useful when you want an object to always be drawn between certain sets of other objects. For example, transparent water should in most cases be drawn after opaque objects but before transparent objects.
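The queue arithmetic can be illustrated with a small sketch; this is a hypothetical resolver for values like "Geometry+1", not Unity's actual tag parser:

```javascript
// Resolving a Queue tag value such as "Geometry+1" to its integer index,
// using the pre-defined queue indices listed above (illustrative sketch).
var QUEUE_INDEX = {
  Background: 1000,
  Geometry: 2000,
  AlphaTest: 2450,
  Transparent: 3000,
  Overlay: 4000
};

function resolveQueue(tagValue) {
  var match = tagValue.match(/^(\w+)([+-]\d+)?$/);
  var base = QUEUE_INDEX[match[1]];
  var offset = match[2] ? parseInt(match[2], 10) : 0;
  return base + offset;
}

resolveQueue("Geometry+1");   // 2001 - after opaque, before transparent
resolveQueue("Transparent");  // 3000
```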
RenderType tag
The RenderType tag categorizes shaders into several predefined groups, e.g. opaque shaders, alpha-tested shaders, etc. This is used by Shader Replacement and in some cases to produce the camera's depth texture.
IgnoreProjector tag
If the IgnoreProjector tag is given and has a value of "True", then an object that uses this shader will not be affected by Projectors. This is mostly useful on semi-transparent objects, because there is no good way for Projectors to affect them.
See Also
Passes can be given tags as well; see Pass Tags.
Page last updated: 2012-11-13
Reducing File Size
Unity post-processes all imported assets.
Unity always post-processes imported files, so storing a file as a multi-layered psd file instead of a jpg will make no difference to the size of the deployed player. Save the files in the format you are working with (e.g. .mb files, .psd files, .tiff files) to make your life easier.
Unity strips out unused assets.
The amount of assets in your project folder does not influence the size of your built player. Unity is very smart about detecting which assets are used in your game and which are not. Before building a game, Unity follows all references to assets and generates a list of the assets that need to be included in the game. Thus you can safely keep unused assets in your project folder.
Unity prints an overview of the used file size.
After Unity has completed building a player, it prints an overview of which asset types took up the most file size, and which assets were included in the build. To see it, check the editor console log via the button in the Console window.

An overview of what is taking up space.
Optimizing texture size
Often, textures take up most of the space in the build. The first thing to do is to use compressed texture formats (DXT on desktop platforms, or PVRTC) where you can.
If that doesn't get the file size down, try to reduce the size of the textures. The trick here is that you don't need to modify the actual source content. Simply select the texture in the Project view and set Max Texture Size in the Import Settings. It is a good idea to zoom in on an object that uses the texture, then adjust the Max Texture Size until it starts looking worse in the Scene View.

Changing the Max Texture Size does not affect your texture asset, just its resolution in the game.
How much memory does my texture take up?
Desktop
| Compression | Memory consumption |
| RGB Compressed DXT1 | 0.5 bpp (bytes/pixel) |
| RGBA Compressed DXT5 | 1 bpp |
| RGB 16bit | 2 bpp |
| RGB 24bit | 3 bpp |
| Alpha 8bit | 1 bpp |
| RGBA 16bit | 2 bpp |
| RGBA 32bit | 4 bpp |

iOS
| Compression | Memory consumption |
| RGB Compressed PVRTC 2 bits | 0.25 bpp (bytes/pixel) |
| RGBA Compressed PVRTC 2 bits | 0.25 bpp |
| RGB Compressed PVRTC 4 bits | 0.5 bpp |
| RGBA Compressed PVRTC 4 bits | 0.5 bpp |
| RGB 16bit | 2 bpp |
| RGB 24bit | 3 bpp |
| Alpha 8bit | 1 bpp |
| RGBA 16bit | 2 bpp |
| RGBA 32bit | 4 bpp |

Android
| Compression | Memory consumption |
| RGB Compressed DXT1 | 0.5 bpp (bytes/pixel) |
| RGBA Compressed DXT5 | 1 bpp |
| RGB Compressed ETC1 | 0.5 bpp |
| RGB Compressed PVRTC 2 bits | 0.25 bpp (bytes/pixel) |
| RGBA Compressed PVRTC 2 bits | 0.25 bpp |
| RGB Compressed PVRTC 4 bits | 0.5 bpp |
| RGBA Compressed PVRTC 4 bits | 0.5 bpp |
| RGB 16bit | 2 bpp |
| RGB 24bit | 3 bpp |
| Alpha 8bit | 1 bpp |
| RGBA 16bit | 2 bpp |
| RGBA 32bit | 4 bpp |
To figure out the total texture size: width * height * bpp. Add 33% if you have mipmaps.
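The formula above can be sketched as follows; adding 33% for mipmaps is approximated here by multiplying by 4/3, and the function itself is hypothetical:

```javascript
// Estimating texture memory from the tables above:
// size = width * height * bytesPerPixel, plus 33% (~4/3) if mipmaps are enabled.
function textureMemoryBytes(width, height, bytesPerPixel, hasMipmaps) {
  var bytes = width * height * bytesPerPixel;
  return hasMipmaps ? Math.round(bytes * 4 / 3) : bytes;
}

// A 1024x1024 RGBA 32-bit texture (4 bytes/pixel) with mipmaps: ~5.3 MB.
textureMemoryBytes(1024, 1024, 4, true);
// The same texture compressed as DXT5 (1 byte/pixel), no mipmaps: 1 MB.
textureMemoryBytes(1024, 1024, 1, false);
```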
By default, Unity compresses all textures when importing. This can be turned off in the preferences to speed up your workflow; when building a game, any not-yet-compressed textures will be compressed at that point.
Optimizing mesh and animation size
Meshes and imported Animation Clips can be compressed so they take up less space in your game file. Compression can be turned on in the Mesh Import Settings. Experiment with what level of compression is acceptable for your models.
Note that mesh compression only produces smaller data files; it does not use less memory at run time. Animation keyframe reduction produces smaller data files and also uses less memory at run time; generally, you should always use keyframe reduction.
Additionally, you can choose not to store normals and/or tangents in your Meshes, to save space both in the game build and in memory at run time. This can be set in the Tangent Space Generation drop-down in the Mesh Import Settings. Rules of thumb:
- Tangents are used for normal mapping. If you don't use normal mapping, you probably don't need to store tangents in those meshes.
- Normals are used for lighting. If you don't use real-time lighting on some of your meshes, you probably don't need to store normals in them.
Reducing the dlls included in the Player
When building a player (Desktop, Android or iOS) it is important not to depend on System.dll or System.Xml.dll. Unity does not include System.dll or System.Xml.dll in the player installation. That means that if you want to use Xml, or some of the Generic containers which live in System.dll, then the required dlls will be included in the player. This usually adds about 1mb to the download size; obviously this is not good for the distribution of your players, and you should really avoid it. If you need to parse Xml files, you can use a smaller xml library like Mono.Xml.zip. Most Generic containers are contained in mscorlib, but Stack<> and a few others are in System.dll, so you really want to avoid those.
As you can see, Unity is including System.Xml.dll and System.dll when building a player.
Unity includes the following DLLs with the player distribution: mscorlib.dll, Boo.Lang.dll, UnityScript.Lang.dll and UnityEngine.dll.
Page last updated: 2012-11-13
Understanding Automatic Memory Management
When an object, string or array is created, the memory required to store it is allocated from a central pool called the heap. When the item is no longer in use, the memory it once occupied can be reclaimed and used for something else. In the past, it was typically up to the programmer to allocate and release these blocks of heap memory explicitly with the appropriate function calls. Nowadays, runtime systems like Unity's Mono engine manage memory for you automatically. Automatic memory management requires less coding effort than explicit allocation/release and greatly reduces the potential for memory leakage (the situation where memory is allocated but never subsequently released).
Value and Reference Types
When a function is called, the values of its parameters are copied to an area of memory reserved for that specific call. Data types that occupy only a few bytes can be copied very quickly and easily. However, it is common for objects, strings and arrays to be much larger and it would be very inefficient if these types of data were copied on a regular basis. Fortunately, this is not necessary; the actual storage space for a large item is allocated from the heap and a small "pointer" value is used to remember its location. From then on, only the pointer need be copied during parameter passing. As long as the runtime system can locate the item identified by the pointer, a single copy of the data can be used as often as necessary.
Types that are stored directly and copied during parameter passing are called value types. These include integers, floats, booleans and Unity's struct types (eg, Color and Vector3). Types that are allocated on the heap and then accessed via a pointer are called reference types, since the value stored in the variable merely "refers" to the real data. Examples of reference types include objects, strings and arrays.
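The distinction can be illustrated in plain JavaScript, which likewise copies numbers into parameters but passes arrays by reference (a minimal sketch; Mono's value/reference semantics differ in details such as user-defined structs):

```javascript
// Value-type behaviour: the number is copied into the parameter,
// so reassigning it has no effect on the caller's variable.
function tryToChangeNumber(n) { n = 99; }

// Reference-type behaviour: the parameter refers to the same array,
// so modifications made here are visible to the caller.
function changeArray(arr) { arr[0] = 99; }

var num = 1;
tryToChangeNumber(num);   // num is still 1

var list = [1, 2, 3];
changeArray(list);        // list[0] is now 99
```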
Allocation and Garbage Collection
The memory manager keeps track of areas in the heap that it knows to be unused. When a new block of memory is requested (say when an object is instantiated), the manager chooses an unused area from which to allocate the block and then removes the allocated memory from the known unused space. Subsequent requests are handled the same way until there is no free area large enough to allocate the required block size. It is highly unlikely at this point that all the memory allocated from the heap is still in use. A reference item on the heap can only be accessed as long as there are still reference variables that can locate it. If all references to a memory block are gone (ie, the reference variables have been reassigned or they are local variables that are now out of scope) then the memory it occupies can safely be reallocated.
To determine which heap blocks are no longer in use, the memory manager searches through all currently active reference variables and marks the blocks they refer to as "live". At the end of the search, any space between the live blocks is considered empty by the memory manager and can be used for subsequent allocations. For obvious reasons, the process of locating and freeing up unused memory is known as garbage collection (or GC for short).
Optimization
Garbage collection is automatic and invisible to the programmer but the collection process actually requires significant CPU time behind the scenes. When used correctly, automatic memory management will generally equal or beat manual allocation for overall performance. However, it is important for the programmer to avoid mistakes that will trigger the collector more often than necessary and introduce pauses in execution.
There are some infamous algorithms that can be GC nightmares even though they seem innocent at first sight. Repeated string concatenation is a classic example:-
function ConcatExample(intArray: int[]) {
var line = intArray[0].ToString();
for (i = 1; i < intArray.Length; i++) {
line += ", " + intArray[i].ToString();
}
return line;
}
The key detail here is that the new pieces don't get added to the string in place, one by one. What actually happens is that each time around the loop, the previous contents of the line variable become dead - a whole new string is allocated to contain the original piece plus the new part at the end. Since the string gets longer with increasing values of i, the amount of heap space being consumed also increases and so it is easy to use up hundreds of bytes of free heap space each time this function is called. If you need to concatenate many strings together then a much better option is the Mono library's System.Text.StringBuilder class.
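The same fix carries over to other managed languages. In plain JavaScript (standing in here for Mono's System.Text.StringBuilder), you can collect the pieces and allocate the final string only once:

```javascript
// Naive version: each += allocates a whole new string on every iteration.
function concatNaive(intArray) {
  var line = intArray[0].toString();
  for (var i = 1; i < intArray.length; i++) {
    line += ", " + intArray[i].toString();
  }
  return line;
}

// Allocation-friendly version: build the pieces into an array and join
// once at the end (analogous in spirit to StringBuilder).
function concatJoin(intArray) {
  return intArray.join(", ");
}

concatNaive([1, 2, 3]);  // "1, 2, 3"
concatJoin([1, 2, 3]);   // "1, 2, 3"
```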
However, even repeated concatenation won't cause too much trouble unless it is called frequently, and in Unity that usually implies the frame update. Something like:-
var scoreBoard: GUIText;
var score: int;
function Update() {
var scoreText: String = "Score: " + score.ToString();
scoreBoard.text = scoreText;
}
...will allocate new strings each time Update is called and generate a constant trickle of new garbage. Most of that can be saved by updating the text only when the score changes:-
var scoreBoard: GUIText;
var scoreText: String;
var score: int;
var oldScore: int;
function Update() {
if (score != oldScore) {
scoreText = "Score: " + score.ToString();
scoreBoard.text = scoreText;
oldScore = score;
}
}
Another potential problem occurs when a function returns an array value:-
function RandomList(numElements: int) {
var result = new float[numElements];
for (i = 0; i < numElements; i++) {
result[i] = Random.value;
}
return result;
}
This type of function is very elegant and convenient when creating a new array filled with values. However, if it is called repeatedly then fresh memory will be allocated each time. Since arrays can be very large, the free heap space could get used up rapidly, resulting in frequent garbage collections. One way to avoid this problem is to make use of the fact that an array is a reference type. An array passed into a function as a parameter can be modified within that function and the results will remain after the function returns. A function like the one above can often be replaced with something like:-
function RandomList(arrayToFill: float[]) {
for (i = 0; i < arrayToFill.Length; i++) {
arrayToFill[i] = Random.value;
}
}
This simply replaces the existing contents of the array with new values. Although this requires the initial allocation of the array to be done in the calling code (which looks slightly inelegant), the function will not generate any new garbage when it is called.
Requesting a Collection
As mentioned above, it is best to avoid allocations as far as possible. However, given that they can't be completely eliminated, there are two main strategies you can use to minimise their intrusion into gameplay:-
Small heap with fast and frequent garbage collection
This strategy is often best for games that have long periods of gameplay where a smooth framerate is the main concern. A game like this will typically allocate small blocks frequently but these blocks will be in use only briefly. The typical heap size when using this strategy on iOS is about 200KB and garbage collection will take about 5ms on an iPhone 3G. If the heap increases to 1MB, the collection will take about 7ms. It can therefore be advantageous sometimes to request a garbage collection at a regular frame interval. This will generally make collections happen more often than strictly necessary but they will be processed quickly and with minimal effect on gameplay:-
if (Time.frameCount % 30 == 0)
{
System.GC.Collect();
}
However, you should use this technique with caution and check the profiler statistics to make sure that it is really reducing collection time for your game.
Large heap with slow but infrequent garbage collection
This strategy works best for games where allocations (and therefore collections) are relatively infrequent and can be handled during pauses in gameplay. It is useful for the heap to be as large as possible without being so large as to get your app killed by the OS due to low system memory. However, the Mono runtime avoids expanding the heap automatically if at all possible. You can expand the heap manually by preallocating some placeholder space during startup (ie, you instantiate a "useless" object that is allocated purely for its effect on the memory manager):-
function Start() {
var tmp = new System.Object[1024];
// make allocations in smaller blocks to avoid them being treated in a special way, which is designed for large blocks
for (var i : int = 0; i < 1024; i++)
tmp[i] = new byte[1024];
// release reference
tmp = null;
}
A sufficiently large heap should not get completely filled up between those pauses in gameplay that would accommodate a collection. When such a pause occurs, you can request a collection explicitly:-
System.GC.Collect();
Again, you should take care when using this strategy and pay attention to the profiler statistics rather than just assuming it is having the desired effect.
Reusable Object Pools
There are many cases where you can avoid generating garbage simply by reducing the number of objects that get created and destroyed. There are certain types of objects in games, such as projectiles, which may be encountered over and over again even though only a small number will ever be in play at once. In cases like this, it is often possible to reuse objects rather than destroy old ones and replace them with new ones.
See here for more information on Object Pools and their implementation.
Further Information
Memory management is a subtle and complex subject to which a great deal of academic effort has been devoted. If you are interested in learning more about it then memorymanagement.org is an excellent resource, listing many publications and online articles. Further information about object pooling can be found on the Wikipedia page and also at Sourcemaking.com.
Page last updated: 2012-07-30
Platform Dependent Compilation
Unity includes a feature named "Platform Dependent Compilation". This consists of some preprocessor directives that let you partition your scripts to compile and execute a section of code exclusively for one of the supported platforms.
Furthermore, you can run this code within the Editor, so you can compile the code specifically for your mobile/console and test it in the Editor!
Platform Defines
The platform defines that Unity supports for your scripts are:
| UNITY_EDITOR | Define for calling Unity Editor scripts from your game code. |
| UNITY_STANDALONE_OSX | Platform define for compiling/executing code specifically for Mac OS (This includes Universal, PPC and Intel architectures). |
| UNITY_DASHBOARD_WIDGET | Platform define when creating code for Mac OS dashboard widgets. |
| UNITY_STANDALONE_WIN | Use this when you want to compile/execute code for Windows stand alone applications. |
| UNITY_STANDALONE_LINUX | Use this when you want to compile/execute code for Linux stand alone applications. |
| UNITY_WEBPLAYER | Platform define for web player content (this includes Windows and Mac Web player executables). |
| UNITY_WII | Platform define for compiling/executing code for the Wii console. |
| UNITY_IPHONE | Platform define for compiling/executing code for the iPhone platform. |
| UNITY_ANDROID | Platform define for the Android platform. |
| UNITY_PS3 | Platform define for running PlayStation 3 code. |
| UNITY_XBOX360 | Platform define for executing Xbox 360 code. |
| UNITY_NACL | Platform define when compiling code for Google native client (this will be set additionally to UNITY_WEBPLAYER). |
| UNITY_FLASH | Platform define when compiling code for Adobe Flash. |
You can also compile code selectively depending on the version of the engine you are working with. The currently supported version defines are:
| UNITY_2_6 | Platform define for the major version of Unity 2.6. |
| UNITY_2_6_1 | Platform define for specific version 1 from the major release 2.6. |
| UNITY_3_0 | Platform define for the major version of Unity 3.0. |
| UNITY_3_0_0 | Platform define for the specific version 0 of Unity 3.0. |
| UNITY_3_1 | Platform define for major version of Unity 3.1. |
| UNITY_3_2 | Platform define for major version of Unity 3.2. |
| UNITY_3_3 | Platform define for major version of Unity 3.3. |
| UNITY_3_4 | Platform define for major version of Unity 3.4. |
| UNITY_3_5 | Platform define for major version of Unity 3.5. |
| UNITY_4_0 | Platform define for major version of Unity 4.0. |
Note: For versions before 2.6.0 there are no platform defines as this feature was first introduced in that version.
Testing precompiled code
Here is a small example of how to use platform dependent compilation. It simply prints a message that depends on the platform you have selected as your build target.
First of all, select the platform you want to test your code against by choosing File > Build Settings. This will bring up the Build Settings window, where you can select your target platform.

Build Settings window with WebPlayer selected as the target platform.
Select the platform you want to test your precompiled code against and press Switch Platform to tell Unity which platform you are targeting.
Create a script and copy/paste this code:
JavaScript Example:
function Awake() {
#if UNITY_EDITOR
    Debug.Log("Unity Editor");
#endif
#if UNITY_IPHONE
    Debug.Log("iPhone");
#endif
#if UNITY_STANDALONE_OSX
    Debug.Log("Stand Alone OSX");
#endif
#if UNITY_STANDALONE_WIN
    Debug.Log("Stand Alone Windows");
#endif
}
C# Example:
using UnityEngine;
using System.Collections;

public class PlatformDefines : MonoBehaviour {
    void Start () {
#if UNITY_EDITOR
        Debug.Log("Unity Editor");
#endif
#if UNITY_IPHONE
        Debug.Log("iPhone");
#endif
#if UNITY_STANDALONE_OSX
        Debug.Log("Stand Alone OSX");
#endif
#if UNITY_STANDALONE_WIN
        Debug.Log("Stand Alone Windows");
#endif
    }
}
Boo Example:
import UnityEngine

class PlatformDefines (MonoBehaviour):
    def Start ():
        ifdef UNITY_EDITOR:
            Debug.Log("Unity Editor")
        ifdef UNITY_IPHONE:
            Debug.Log("iPhone")
        ifdef UNITY_STANDALONE_OSX:
            Debug.Log("Stand Alone OSX")
        ifdef not UNITY_IPHONE:
            Debug.Log("not an iPhone")
Then, depending on which platform you selected, one of the messages will get printed on the Unity console when you press play.
In addition to the basic #if compiler directive, you can also use a multiway test in C# and JavaScript:-
#if UNITY_EDITOR
    Debug.Log("Unity Editor");
#elif UNITY_IPHONE
    Debug.Log("Unity iPhone");
#else
    Debug.Log("Any other platform");
#endif
However, Boo currently supports only the ifdef directive.
Page last updated: 2012-11-28
Generic Functions
Some functions in the script reference (for example, the various GetComponent functions) are listed with a variant that has a letter T or a type name in angle brackets after the function name:-
function FuncName.<T>(): T;
These are known as generic functions. The significance they have for scripting is that you get to specify the types of parameters and/or the return type when you call the function. In JavaScript, this can be used to get around the limitations of dynamic typing:-
// The type is correctly inferred since it is defined in the function call.
var obj = GetComponent.<Rigidbody>();
In C#, it can save a lot of keystrokes and casts:-
Rigidbody rb = go.GetComponent<Rigidbody>();

// ...as compared with:-
Rigidbody rb = (Rigidbody) go.GetComponent(typeof(Rigidbody));
Any function that has a generic variant listed on its script reference page will allow this special calling syntax.
Page last updated: 2011-08-05
Debugging
As you create your game, errors in your scripts or scene setup can (and inevitably will) produce unexpected and unwanted behavior. Such unwanted behavior is commonly called a bug, and the process of fixing it is called debugging. Unity offers several methods you can use to debug your game. Read about them on the following pages.
Page last updated: 2012-11-09
Console
Double-clicking an error in the Status Bar or choosing Window > Console will bring up the Console.

Console in the editor.
The Console shows messages, warnings, errors, or debug output from your game. You can define your own messages to be sent to the Console using Debug.Log, Debug.LogWarning, or Debug.LogError. You can double-click any message to be taken to the script that caused it. You also have a number of options on the Console Toolbar.

The Console control toolbar helps you filter your debug output.
- Pressing Clear will remove all current messages from the Console.
- When Collapse is enabled, identical messages will only be shown once.
- When Clear on Play is enabled, all messages will be removed from the Console every time you go into Play mode.
- When Error Pause is enabled, Debug.LogError will cause the pause to occur but Debug.Log will not.
- Pressing Open Player Log will open the Player Log in a text editor (or in the Console app on Mac if it is set as the default app for .log files).
- Pressing Open Editor Log will open the Editor Log in a text editor (or in the Console app on Mac if it is set as the default app for .log files).
Debugger
The Unity debugger lets you inspect your code at runtime. For example, it can help you determine when a function is called and with which values, and it allows you to look at the value of script variables at a given moment while the game is running. By executing your scripts step by step you can locate bugs and logic problems in them.
Unity uses the MonoDevelop IDE to debug the scripts in your game. You can debug all the languages supported by the engine (JavaScript, C#, and Boo).
Note that the debugger has to load all your code and all symbols, so it can have a small impact on the performance of your game during execution. Typically, this overhead is not large enough to affect the game's frame rate.

MonoDevelop window debugging a Unity script.
Debugging in Unity
On Windows, you must choose to install MonoDevelop as part of the Unity installation (it is selected by default).
- If you haven't used MonoDevelop with your project before, synchronize your MonoDevelop project. This will open your project inside MonoDevelop.
- Set the necessary breakpoints on your scripts by clicking the lines that you want to analyze.
- Launch Unity or your player:
- Unity: ensure that the Editor Attaching option is checked in the preferences window.
- Players: ensure you have built your player with the Development Build and Allow script debugging options enabled. For webplayers, additionally check that the development release channel setting is enabled on the player's context menu (right click on Windows or cmd-click on Mac OS X).

Enabling debugging in the webplayer
- Open your project in MonoDevelop.
- In MonoDevelop, click the Attach button in the toolbar, or choose Attach from the Run menu.
- From the dialog that appears, choose the item you wish to debug.
- Notes:
- Currently supported debugging targets are: the Unity editor, desktop standalone players, and Android and iOS players.
- If your player is set not to run in the background (the default), you may need to focus your player for a few seconds in order for it to appear in the list.
- Android and iOS players need to have networking enabled when script debugging is enabled. All players need to be on the same network subnet as the computer running MonoDevelop.
- When you enter Play mode, your script code will execute in the debugger.
- When a breakpoint is hit, script execution will stop, and you will be able to use MonoDevelop to step over, into, and out of your script methods, inspect variables, and examine the call stack.
- Note: when you are done debugging a top-level method (e.g. Update()), or you just want to jump to the next breakpoint, you will experience better performance by using the Continue command instead of stepping out of or over the end of the function.
- When you are done debugging, click the Detach or Stop button in the toolbar, or choose Detach or Stop from the Run menu.
Hints
- If you add a watch to the this object, you can inspect the internal values (position, scale, rotation) of the GameObject that the script is attached to.
iOS remote debugging instructions
In addition to the instructions described above, Unity iOS applications require a few extra steps for successful debugging:
- Attach your iDevice to your WiFi network (the same requirement as for remote profiling).
- Hit Build & Run in the Unity editor.
- When the application builds, installs, and launches via Xcode, click Stop in Xcode.
- Manually find and launch your application on your iDevice. (Note: if the application is launched via Xcode, you won't be able to resume after reaching a breakpoint.)
- When the app is running on the device, switch to MonoDevelop and click the attach icon in the debugging toolbar. Select your device from the list of available instances (if several instances are shown, select the bottom one).
Log Files
There might be times during development when you need to obtain information from the logs of the webplayer you've built, your standalone player, the target device, or the editor. Usually you need to see these files when you have experienced a problem and need to know where exactly the problem occurred.
On Mac the webplayer, player, and editor logs can be accessed uniformly through the standard Console.app utility.
On Windows the webplayer and editor logs are placed in folders that are not shown in Windows Explorer by default. Please see the Accessing hidden folders page to resolve that situation.
Editor
The Editor log can be brought up through the Open Editor Log button in Unity's Console window.
| Mac OS X | ~/Library/Logs/Unity/Editor.log |
| Windows XP * | C:\Documents and Settings\username\Local Settings\Application Data\Unity\Editor\Editor.log |
| Windows Vista/7 * | C:\Users\username\AppData\Local\Unity\Editor\Editor.log |
(*) On Windows the Editor log file is stored in the local application data folder: %LOCALAPPDATA%\Unity\Editor\Editor.log, where LOCALAPPDATA is defined by CSIDL_LOCAL_APPDATA.

Desktop
On Mac all the logs can be accessed uniformly through the standard Console.app utility.
Webplayer
| Mac OS X | ~/Library/Logs/Unity/WebPlayer.log |
| Windows XP * | C:\Documents and Settings\username\Local Settings\Temp\UnityWebPlayer\log\log_UNIQUEID.txt |
| Windows Vista/7 * | C:\Users\username\AppData\Local\Temp\UnityWebPlayer\log\log_UNIQUEID.txt |
| Windows Vista/7 + IE7 + UAC * | C:\Users\username\AppData\Local\Temp\Low\UnityWebPlayer\log\log_UNIQUEID.txt |
(*) On Windows the webplayer log is stored in a temporary folder: %TEMP%\UnityWebPlayer\log\log_UNIQUEID.txt, where TEMP is defined by GetTempPath.
Player
| Mac OS X | ~/Library/Logs/Unity/Player.log |
| Windows * | EXECNAME_Data\output_log.txt |
(*) EXECNAME_Data is a folder next to the executable with your game.
Note that on Windows standalones the location of the log file can be changed (or logging suppressed). See the command line page for further details.

iOS
The device log can be accessed in XCode via GDB console or the Organizer Console. The latter is useful for getting crashlogs when your application was not running through the XCode debugger.
Please see Debugging Applications in the iOS Development Guide. Our Troubleshooting and Bug Reporting guides may also be useful.

Android
The device log can be viewed by using the logcat console. Use the adb application found in the Android SDK/platform-tools directory with a trailing logcat parameter:
$ adb logcat
Another way to inspect the LogCat is to use the Dalvik Debug Monitor Server (DDMS). DDMS can be started either from Eclipse or from inside the Android SDK/tools. DDMS also provides a number of other debug related tools.
Accessing Hidden Folders
On Windows the logs are stored in locations that are hidden by default. To enable navigating to them in the Windows Explorer please perform the steps below.
Show hidden folders on Windows XP
The Local Settings folder is hidden by default. In order to see it, you have to enable viewing of hidden folders in Windows Explorer from Tools > Folder Options... > View (tab).

Enabling viewing of hidden folders in Windows XP
Show hidden folders on Windows Vista/7
The AppData folder is hidden by default. In order to see it, you have to enable viewing of hidden folders in Windows Explorer from Tools > Folder Options... > View (tab). The Tools menu is hidden by default, but can be displayed by pressing the Alt key once.

Enabling viewing of hidden folders in Windows Vista
Plugins
Unity has extensive support for Plugins, which are libraries of native code written in C, C++, Objective-C, etc. Plugins allow your game code (written in JavaScript, C#, or Boo) to call functions from these libraries. This feature allows Unity to integrate with middleware libraries or existing C/C++ game code.
Note: On the desktop platforms, plugins are a pro-only feature. For security reasons, plugins are not usable with webplayers.
In order to use a plugin you need to do two things:-
- Write functions in a C-based language and compile them into a library.
- Create a C# script which calls functions in the library.
The plugin should provide a simple C interface which the C# script then exposes to other user scripts. It is also possible for Unity to call functions exported by the plugin when certain low-level rendering events happen (for example, when a graphics device is created), see the Native Plugin Interface page for details.
Here is a very simple example:
C File of a Minimal Plugin:
float FooPluginFunction () { return 5.0F; }
C# Script that Uses the Plugin:
using UnityEngine;
using System.Runtime.InteropServices;
class SomeScript : MonoBehaviour {
#if UNITY_IPHONE || UNITY_XBOX360
// On iOS and Xbox 360 plugins are statically linked into
// the executable, so we have to use __Internal as the
// library name.
[DllImport ("__Internal")]
#else
// Other platforms load plugins dynamically, so pass the name
// of the plugin's dynamic library.
[DllImport ("PluginName")]
#endif
private static extern float FooPluginFunction ();
void Awake () {
// Calls the FooPluginFunction inside the plugin
// And prints 5 to the console
print (FooPluginFunction ());
}
}
Note that when using JavaScript you will need to use the following syntax, where DLLName is the name of the plugin you have written, or "__Internal" if you are writing statically linked native code:
@DllImport (DLLName)
static private function FooPluginFunction () : float {};
Creating a Plugin
In general, plugins are built with native code compilers on the target platform. Since plugin functions use a C-based call interface, you must avoid name mangling issues when using C++ or Objective-C.
For further details and examples, see the following pages:-
Further Information
- Native Plugin Interface - this is needed if you want to do rendering in your plugin.
- Mono Interop with native libraries.
- P-invoke documentation on MSDN.
PluginsForDesktop
This page describes Native Code Plugins for desktop platforms (Windows/Mac OS X/Linux). Note that plugins are intentionally disabled in webplayers for security reasons.
Building a Plugin for Mac OS X
On Mac OS X, plugins are deployed as bundles. You can create the bundle project with XCode by selecting File > New Project... and then selecting Bundle - Carbon/Cocoa Loadable Bundle.
If you are using C++ (.cpp) or Objective-C (.mm) to implement the plugin then you must ensure the functions are declared with C linkage to avoid name mangling issues.
extern "C" {
float FooPluginFunction ();
}
Building a Plugin for Windows
Plugins on Windows are DLL files with exported functions. Practically any language or development environment that can create DLL files can be used to create plugins.
As with Mac OSX, you should declare any C++ functions with C linkage to avoid name mangling issues.
Building a Plugin for Linux
Plugins on Linux are .so files with exported functions. These libraries are typically written in C or C++, but any language can be used.
As with the other platforms, you should declare any C++ functions with C linkage in order to avoid name mangling issues.
32-bit and 64-bit libraries
Currently, plugins for 32-bit and 64-bit players need to be managed manually. For example, before building a 64-bit player you need to copy the 64-bit library into the Assets/Plugins folder, and before building a 32-bit player you need to copy the 32-bit library into the Assets/Plugins folder.
Using your plugin from C#
Once built, the bundle should be placed in the Assets/Plugins folder in the Unity project. Unity will then find it by name when you define a function like this in the C# script:-
[DllImport ("PluginName")]
private static extern float FooPluginFunction ();
Please note that PluginName should not include the library prefix nor file extension. For example, the actual name of the plugin file would be PluginName.dll on Windows and libPluginName.so on Linux.
Be aware that whenever you change code in the Plugin you will need to recompile scripts in your project or else the plugin will not have the latest compiled code.
Deployment
For cross platform plugins you must include the .bundle (for Mac), .dll (for Windows), and .so (for Linux) files in the Plugins folder.
No further work is then required on your side - Unity automatically picks the right plugin for the target platform and includes it with the player.
Examples
Simplest Plugin
This plugin project implements only some very basic operations (print a number, print a string, add two floats, add two integers). This example may be helpful if this is your first Unity plugin.
The project can be found here and includes Windows, Mac, and Linux project files.
Rendering from C++ code
An example multiplatform plugin that works with multithreaded rendering in Unity can be found on the Native Plugin Interface page.
Midi Plugin
A complete example of the Plugin interface can be found here.
This is a complete Midi plugin for OS X which uses Apple's CoreMidi API. It provides a simple C API and a C# class to access it from Unity. The C# class contains a high level API, with easy access to NoteOn and NoteOff events and their velocity.
Texture Plugin
An example of how to assign image data to a texture directly in OpenGL (note that this will only work when Unity is using an OpenGL renderer). This example includes both XCode and Visual Studio project files. The plugin, along with an accompanying Unity project, can be found here.
Page last updated: 2012-11-26
PluginsForIOS
This page describes Native Code Plugins for the iOS platform.
Building an Application with a Native Plugin for iOS
- Define your extern method in the C# file as follows:
[DllImport ("__Internal")]
private static extern float FooPluginFunction ();
- Set the editor to the iOS build target.
- Add your native code source files to the generated XCode project's "Classes" folder (this folder is not overwritten when the project is updated, but don't forget to back up your native code).
If you are using C++ (.cpp) or Objective-C (.mm) to implement the plugin you must ensure the functions are declared with C linkage to avoid name mangling issues.
extern "C" {
float FooPluginFunction ();
}
Using Your Plugin from C#
iOS native plugins can be called only when deployed on the actual device, so it is recommended to wrap all native code methods with an additional C# code layer. This code should check Application.platform and call native methods only when the app is running on the device; dummy values can be returned when the app runs in the Editor. See the Bonjour browser sample application for an example.
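As a sketch of that pattern (the NativeBridge class name and the dummy value are hypothetical; FooPluginFunction is the example plugin function used elsewhere on this page):

```
using UnityEngine;
using System.Runtime.InteropServices;

public class NativeBridge {
    [DllImport ("__Internal")]
    private static extern float FooPluginFunction ();

    // Wrapper that only calls into native code on an actual device;
    // in the Editor it returns a dummy value instead.
    public static float SafeFooPluginFunction () {
        if (Application.platform == RuntimePlatform.IPhonePlayer)
            return FooPluginFunction ();
        return 0.0f; // dummy value for the Editor
    }
}
```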
Calling C# / JavaScript back from native code
Unity iOS supports limited native-to-managed callback functionality via UnitySendMessage:
UnitySendMessage("GameObjectName1", "MethodName1", "Message to send");
This function has three parameters: the name of the target GameObject, the script method to call on that object, and the message string to pass to the called method.
Known limitations:
- Only script methods that correspond to the following signature can be called from native code:
function MethodName(message:string)
- Calls to UnitySendMessage are asynchronous and have a delay of one frame.
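For illustration, a C# receiver matching that signature might look like this (the class name is hypothetical; the GameObject and method names match the UnitySendMessage example above):

```
using UnityEngine;

// Attach to a GameObject named "GameObjectName1"; native code can then call
// UnitySendMessage("GameObjectName1", "MethodName1", "Message to send");
public class NativeCallbackReceiver : MonoBehaviour {
    void MethodName1 (string message) {
        Debug.Log ("Message from native code: " + message);
    }
}
```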
Automated plugin integration
Unity iOS supports automated plugin integration in a limited way. All files with the extensions .a, .m, .mm, .c, and .cpp located in the Assets/Plugins/iOS folder will be merged into the generated Xcode project automatically. However, merging is done by symlinking files from Assets/Plugins/iOS to the final destination, which might affect some workflows. The .h files are not included in the Xcode project tree, but they appear on the destination file system, thus allowing compilation of .m/.mm/.c/.cpp files.
Note: subfolders are currently not supported.
iOS Tips
- Managed-to-unmanaged calls are quite processor intensive on iOS. Try to avoid calling multiple native methods per frame.
- As mentioned above, wrap your native methods with an additional C# layer that calls native code on the device and returns dummy values in the Editor.
- String values returned from a native method should be UTF-8 encoded and allocated on the heap. Mono marshaling calls are free for strings like this.
- As mentioned above, the XCode project's "Classes" folder is a good place to store your native code because it is not overwritten when the project is updated.
- Another good place for storing native code is the Assets folder or one of its subfolders. Just add references from the XCode project to the native code files: right click on the "Classes" subfolder and choose "Add->Existing files...".
Examples
Bonjour Browser Sample
A simple example of the use of a native code plugin can be found here
This sample demonstrates how Objective-C code can be invoked from a Unity iOS application. This application implements a very simple Bonjour client. The application consists of a Unity iOS project (Plugins/Bonjour.cs is the C# interface to the native code, while BonjourTest.js is the JS script that implements the application logic) and native code (Assets/Code) that should be added to the built XCode project.
Page last updated: 2011-11-01
PluginsForAndroid
This page describes Native Code Plugins for Android.
Building a Plugin for Android
To build a plugin for Android, you should first obtain the Android NDK and familiarize yourself with the steps involved in building a shared library.
If you are using C++ (.cpp) to implement the plugin you must ensure the functions are declared with C linkage to avoid name mangling issues.
extern "C" {
float FooPluginFunction ();
}
Using Your Plugin from C#
Once built, the shared library should be copied to the Assets/Plugins/Android folder. Unity will then find it by name when you define a function like the following in the C# script:-
[DllImport ("PluginName")]
private static extern float FooPluginFunction ();
Please note that PluginName should not include the prefix ('lib') nor the extension ('.so') of the filename. It is advisable to wrap all native code methods with an additional C# code layer. This code should check Application.platform and call native methods only when the app is running on the actual device; dummy values can be returned from the C# code when running in the Editor. You can also use platform defines to control platform dependent code compilation.
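A minimal sketch of such a wrapper (the AndroidBridge class name and the dummy value are hypothetical; FooPluginFunction is the example plugin function from above):

```
using UnityEngine;
using System.Runtime.InteropServices;

public class AndroidBridge {
#if UNITY_ANDROID
    [DllImport ("PluginName")]
    private static extern float FooPluginFunction ();
#endif

    // Call native code only when running on an actual Android device;
    // return a dummy value everywhere else (including the Editor).
    public static float SafeFooPluginFunction () {
#if UNITY_ANDROID
        if (Application.platform == RuntimePlatform.Android)
            return FooPluginFunction ();
#endif
        return 0.0f; // dummy value
    }
}
```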
Deployment
For cross platform deployment, your project should include plugins for each supported platform (ie, libPlugin.so for Android, Plugin.bundle for Mac and Plugin.dll for Windows). Unity automatically picks the right plugin for the target platform and includes it with the player.
Using Java Plugins
The Android plugin mechanism also allows Java to be used to enable interaction with the Android OS.
Building a Java Plugin for Android
There are several ways to create a Java plugin but the result in each case is that you end up with a .jar file containing the .class files for your plugin. One approach is to download the JDK, then compile your .java files from the command line with javac. This will create .class files which you can then package into a .jar with the jar command line tool. Another option is to use the Eclipse IDE together with the ADT.
Using Your Java Plugin from Native Code
Once you have built your Java plugin (.jar) you should copy it to the Assets/Plugins/Android folder in the Unity project. Unity will package your .class files together with the rest of the Java code and then access the code using the Java Native Interface (JNI). JNI is used both when calling native code from Java and when interacting with Java (or the JavaVM) from native code.
To find your Java code from the native side you need access to the Java VM. Fortunately, that access can be obtained easily by adding a function like this to your C/C++ code:
jint JNI_OnLoad(JavaVM* vm, void* reserved) {
    JNIEnv* jni_env = 0;
    vm->AttachCurrentThread(&jni_env, 0);
    return JNI_VERSION_1_6; // report the JNI version the plugin requires
}
This is all that is needed to start using Java from C/C++. It is beyond the scope of this document to explain JNI completely. However, using it usually involves finding the class definition, resolving the constructor (<init>) method and creating a new object instance, as shown in this example:-
jobject createJavaObject(JNIEnv* jni_env) {
    jclass cls_JavaClass = jni_env->FindClass("com/your/java/Class");              // find class definition
    jmethodID mid_JavaClass = jni_env->GetMethodID(cls_JavaClass, "<init>", "()V"); // find constructor method
    jobject obj_JavaClass = jni_env->NewObject(cls_JavaClass, mid_JavaClass);       // create object instance
    return jni_env->NewGlobalRef(obj_JavaClass);                                    // return object with a global reference
}
Using Your Java Plugin with helper classes
AndroidJNIHelper and AndroidJNI can be used to ease some of the pain with raw JNI.
AndroidJavaObject and AndroidJavaClass automate a lot of tasks and also use caching to make calls to Java faster. The combination of AndroidJavaObject and AndroidJavaClass builds on top of AndroidJNI and AndroidJNIHelper, but also has a lot of logic in its own right (to handle the automation). These classes also come in a 'static' version to access static members of Java classes.
You can choose whichever approach you prefer, be it raw JNI through AndroidJNI class methods, or AndroidJNIHelper together with AndroidJNI and eventually AndroidJavaObject/AndroidJavaClass for maximum automation and convenience.
UnityEngine.AndroidJNI is a wrapper for the JNI calls available in C (as described above). All methods in this class are static and have a 1:1 mapping to the Java Native Interface. UnityEngine.AndroidJNIHelper provides helper functionality used by the next level, but is exposed as public methods because they may be useful for some special cases.
Instances of UnityEngine.AndroidJavaObject and UnityEngine.AndroidJavaClass have a 1:1 mapping to an instance of java.lang.Object and java.lang.Class (or subclasses thereof) on the Java side, respectively. They essentially provide 3 types of interaction with the Java side:
- Call a method
- Get the value of a field
- Set the value of a field
The Call is separated into two categories: Call to a 'void' method, and Call to a method with non-void return type. A generic type is used to represent the return type of those methods which return a non-void type. The Get and Set always take a generic type representing the field type.
Example 1
//The comments describe what you would need to do if you were using raw JNI
AndroidJavaObject jo = new AndroidJavaObject("java.lang.String", "some_string");
// jni.FindClass("java.lang.String");
// jni.GetMethodID(classID, "<init>", "(Ljava/lang/String;)V");
// jni.NewStringUTF("some_string");
// jni.NewObject(classID, methodID, javaString);
int hash = jo.Call<int>("hashCode");
// jni.GetMethodID(classID, "hashCode", "()I");
// jni.CallIntMethod(objectID, methodID);
Here, we're creating an instance of java.lang.String, initialized with a string of our choice and retrieving the hash value for that string.
The AndroidJavaObject constructor takes at least one parameter, the name of class for which we want to construct an instance. Any parameters after the class name are for the constructor call on the object, in this case the string "some_string". The subsequent Call to hashCode() returns an 'int' which is why we use that as the generic type parameter to the Call method.
Note: You cannot instantiate a nested Java class using dotted notation. Inner classes must use the $ separator, and it should work in both dotted and slashed format. So android.view.ViewGroup$LayoutParams or android/view/ViewGroup$LayoutParams can be used, where a LayoutParams class is nested in a ViewGroup class.
Example 2
One of the plugin samples above shows how to get the cache directory for the current application. This is how you would do the same thing from C# without any plugins:-
AndroidJavaClass jc = new AndroidJavaClass("com.unity3d.player.UnityPlayer");
// jni.FindClass("com.unity3d.player.UnityPlayer");
AndroidJavaObject jo = jc.GetStatic<AndroidJavaObject>("currentActivity");
// jni.GetStaticFieldID(classID, "currentActivity", "Ljava/lang/Object;");
// jni.GetStaticObjectField(classID, fieldID);
// jni.FindClass("java.lang.Object");
Debug.Log(jo.Call<AndroidJavaObject>("getCacheDir").Call<string>("getCanonicalPath"));
// jni.GetMethodID(classID, "getCacheDir", "()Ljava/io/File;"); // or any baseclass thereof!
// jni.CallObjectMethod(objectID, methodID);
// jni.FindClass("java.io.File");
// jni.GetMethodID(classID, "getCanonicalPath", "()Ljava/lang/String;");
// jni.CallObjectMethod(objectID, methodID);
// jni.GetStringUTFChars(javaString);
In this case, we start with AndroidJavaClass instead of AndroidJavaObject because we want to access a static member of com.unity3d.player.UnityPlayer rather than create a new object (an instance is created automatically by the Android UnityPlayer). Then we access the static field "currentActivity" but this time we use AndroidJavaObject as the generic parameter. This is because the actual field type (android.app.Activity) is a subclass of java.lang.Object, and any non-primitive type must be accessed as AndroidJavaObject. The exceptions to this rule are strings, which can be accessed directly even though they don't represent a primitive type in Java.
After that it is just a matter of traversing the Activity through getCacheDir() to get the File object representing the cache directory, and then calling getCanonicalPath() to get a string representation.
Of course, nowadays you don't need to do that to get the cache directory since Unity provides access to the application's cache and file directory with Application.temporaryCachePath and Application.persistentDataPath.
Example 3
Finally, here is a trick for passing data from Java to script code using UnitySendMessage.
using UnityEngine;

public class NewBehaviourScript : MonoBehaviour {
    void Start () {
        AndroidJNIHelper.debug = true;
        using (AndroidJavaClass jc = new AndroidJavaClass("com.unity3d.player.UnityPlayer")) {
            jc.CallStatic("UnitySendMessage", "Main Camera", "JavaMessage", "whoowhoo");
        }
    }
    void JavaMessage(string message) {
        Debug.Log("message from java: " + message);
    }
}
The Java class com.unity3d.player.UnityPlayer now has a static method UnitySendMessage, equivalent to the iOS UnitySendMessage on the native side. It can be used in Java to pass data to script code.
Here though, we call it directly from script code, which essentially relays the message on the Java side. This then calls back to the native/Unity code to deliver the message to the object named "Main Camera". This object has a script attached which contains a method called "JavaMessage".
Best practice when using Java plugins with Unity
As this section is mainly aimed at people who don't have comprehensive JNI, Java and Android experience, we assume that the AndroidJavaObject/AndroidJavaClass approach has been used for interacting with Java code from Unity.
The first thing to note is that any operation you perform on an AndroidJavaObject or AndroidJavaClass is computationally expensive (as is the raw JNI approach). It is highly advisable to keep the number of transitions between managed and native/Java code to a minimum, for the sake of performance and also code clarity.
You could have a Java method to do all the actual work and then use AndroidJavaObject / AndroidJavaClass to communicate with that method and get the result. However, it is worth bearing in mind that the JNI helper classes try to cache as much data as possible to improve performance.
// The first time you call a Java method it is looked up via JNI and cached:
AndroidJavaObject jo = new AndroidJavaObject("java.lang.String", "some_string"); // somewhat expensive
int hash = jo.Call<int>("hashCode"); // first time - expensive
hash = jo.Call<int>("hashCode"); // second time - not as expensive, as the Java method is already known and can be called directly
The Mono garbage collector should release all created instances of AndroidJavaObject and AndroidJavaClass after use, but it is advisable to keep them in a using(){} statement to ensure they are deleted as soon as possible. Without this, you cannot be sure when they will be destroyed. If you set AndroidJNIHelper.debug to true, you will see a record of the garbage collector's activity in the debug output.
// Getting the system language with the safe approach
void Start () {
    using (AndroidJavaClass cls = new AndroidJavaClass("java.util.Locale")) {
        using (AndroidJavaObject locale = cls.CallStatic<AndroidJavaObject>("getDefault")) {
            Debug.Log("current lang = " + locale.Call<string>("getDisplayLanguage"));
        }
    }
}
You can also call the .Dispose() method directly to ensure there are no Java objects lingering. The actual C# object might live a little longer, but it will eventually be garbage collected by Mono.
Extending the UnityPlayerActivity Java Code
With Unity Android it is possible to extend the standard UnityPlayerActivity class (the primary Java class for the Unity Player on Android, similar to AppController.mm on Unity iOS).
An application can override any and all of the basic interaction between the Android OS and Unity Android. You can enable this by creating a new Activity which derives from UnityPlayerActivity (the UnityPlayerActivity.java source file ships with Unity Android on both Mac and Windows).
To do this, first locate the .jar file shipped with Unity Android; it is found in a sub-folder of the installation folder on both platforms. Add it to the classpath used to compile the new Activity, then compress the resulting .class file(s) into a .jar file and place it in the project's Android plugins folder. Since the manifest dictates which activity to launch, it is also necessary to create a new AndroidManifest.xml and place it in the same folder.
The new activity could look like the following example, OverrideExample.java:
package com.company.product;

import com.unity3d.player.UnityPlayerActivity;
import android.os.Bundle;
import android.util.Log;

public class OverrideExample extends UnityPlayerActivity {
    protected void onCreate(Bundle savedInstanceState) {
        // call UnityPlayerActivity.onCreate()
        super.onCreate(savedInstanceState);
        // print debug message to logcat
        Log.d("OverrideActivity", "onCreate called!");
    }

    public void onBackPressed() {
        // instead of calling UnityPlayerActivity.onBackPressed() we just ignore the back button event
        // super.onBackPressed();
    }
}
And this is what the corresponding AndroidManifest.xml would look like:
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.company.product">
  <application android:icon="@drawable/app_icon" android:label="@string/app_name">
    <activity android:name=".OverrideExample"
              android:label="@string/app_name"
              android:configChanges="fontScale|keyboard|keyboardHidden|locale|mnc|mcc|navigation|orientation|screenLayout|screenSize|smallestScreenSize|uiMode|touchscreen">
      <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
      </intent-filter>
    </activity>
  </application>
</manifest>
UnityPlayerNativeActivity
It is also possible to create your own subclass of UnityPlayerNativeActivity. This will have much the same effect as subclassing UnityPlayerActivity but with improved input latency. Be aware, though, that NativeActivity was introduced in Gingerbread and does not work with older devices. Since touch/motion events are processed in native code, Java views would normally not see those events. There is, however, a forwarding mechanism in Unity which allows events to be propagated to the Dalvik VM. To access this mechanism, you need to modify the manifest file as follows:-
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.company.product">
  <application android:icon="@drawable/app_icon" android:label="@string/app_name">
    <activity android:name=".OverrideExampleNative"
              android:label="@string/app_name"
              android:configChanges="fontScale|keyboard|keyboardHidden|locale|mnc|mcc|navigation|orientation|screenLayout|screenSize|smallestScreenSize|uiMode|touchscreen">
      <meta-data android:name="android.app.lib_name" android:value="unity" />
      <meta-data android:name="unityplayer.ForwardNativeEventsToDalvik" android:value="true" />
      <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
      </intent-filter>
    </activity>
  </application>
</manifest>
Note the ".OverrideExampleNative" value of the android:name attribute in the activity element, and the two additional meta-data elements. The first meta-data entry is an instruction to use the Unity library libunity.so. The second enables events to be passed on to your custom subclass of UnityPlayerNativeActivity.
Examples
Native Plugin Sample
A simple example of the use of a native code plugin can be found here
This sample demonstrates how C code can be invoked from a Unity Android application. The package includes a scene which displays the sum of two values as calculated by the native plugin. Please note that you will need the Android NDK to compile the plugin.
Java Plugin Sample
An example of the use of Java code can be found here
This sample demonstrates how Java code can be used to interact with the Android OS and how C++ creates a bridge between C# and Java. The scene in the package displays a button which when clicked fetches the application cache directory, as defined by the Android OS. Please note that you will need both the JDK and the Android NDK to compile the plugins.
Here is a similar example but based on a prebuilt JNI library to wrap the native code into C#.
Page last updated: 2012-09-25
NativePluginInterface
In addition to the basic script interface, Native Code Plugins in Unity can receive callbacks when certain events happen. This is mostly used to implement low-level rendering in your plugin and enable it to work with Unity's multithreaded rendering.
Note: The rendering callbacks to plugins are not currently supported on mobile platforms.
Access to the Graphics Device
A plugin can receive notification about events on the graphics device by exporting a UnitySetGraphicsDevice function. This will be called when the graphics device is created, before it is destroyed, and also before and after the device is "reset" (this only happens with Direct3D 9). The function has parameters which will receive the device pointer, device type and the kind of event that is taking place.
// If exported by a plugin, this function will be called when graphics device is created, destroyed,
// and before and after it is reset (ie, resolution changed).
extern "C" void EXPORT_API UnitySetGraphicsDevice (void* device, int deviceType, int eventType);
Possible values for deviceType:
enum GfxDeviceRenderer {
    kGfxRendererOpenGL = 0,             // OpenGL
    kGfxRendererD3D9 = 1,               // Direct3D 9
    kGfxRendererD3D11 = 2,              // Direct3D 11
    kGfxRendererGCM = 3,                // Sony PlayStation 3 GCM
    kGfxRendererNull = 4,               // "null" device (used in batch mode)
    kGfxRendererHollywood = 5,          // Nintendo Wii
    kGfxRendererXenon = 6,              // Xbox 360
    kGfxRendererOpenGLES = 7,           // OpenGL ES 1.1
    kGfxRendererOpenGLES20Mobile = 8,   // OpenGL ES 2.0 mobile variant
    kGfxRendererMolehill = 9,           // Flash 11 Stage3D
    kGfxRendererOpenGLES20Desktop = 10, // OpenGL ES 2.0 desktop variant (i.e. NaCl)
};
Possible values for eventType:
enum GfxDeviceEventType {
    kGfxDeviceEventInitialize = 0,
    kGfxDeviceEventShutdown = 1,
    kGfxDeviceEventBeforeReset = 2,
    kGfxDeviceEventAfterReset = 3,
};
Plugin Callbacks on the Rendering Thread
Rendering in Unity can be multithreaded if the platform and number of available CPUs allow for it. When multithreaded rendering is used, the rendering API commands happen on a thread which is completely separate from the one that runs MonoBehaviour scripts. Consequently, it is not always possible for your plugin to start doing some rendering immediately, since it might interfere with whatever the render thread is doing at the time.
In order to do any rendering from the plugin, you should call GL.IssuePluginEvent from your script, which will cause your plugin to be called from the render thread. For example, if you call GL.IssuePluginEvent from the camera's OnPostRender function, you get a plugin callback immediately after the camera has finished rendering.
// If exported by a plugin, this function will be called for GL.IssuePluginEvent script calls.
// The function will be called on a rendering thread; note that when multithreaded rendering is used,
// the render thread WILL BE DIFFERENT from the main thread, on which all scripts & other game logic are executed!
// You have responsibility for ensuring any necessary synchronization with other plugin script calls takes place.
extern "C" void EXPORT_API UnityRenderEvent (int eventID);
Example
An example of a low-level rendering plugin can be downloaded here. It demonstrates two things:
- Renders a rotating triangle from C++ code after all regular rendering is done.
- Fills a procedural texture from C++ code, using Texture.GetNativeTexturePtr to access it.
The project works with Windows (Visual Studio 2008) and Mac OS X (Xcode 3.2) and uses Direct3D 9, Direct3D 11 or OpenGL depending on the platform. The Direct3D 9 code also demonstrates how to handle "lost" devices.
Page last updated: 2012-11-26
TextualSceneFormat
As well as the default binary format, Unity also provides a textual format for scene data. This can be useful when working with version control software, since textual files generated separately can be merged more easily than binary files. Also, the text data can be generated and parsed by tools, making it possible to create and analyze scenes automatically. The pages in this section provide some reference material for working with the format.
Page last updated: 2011-10-13
FormatDescription
Unity's scene format is implemented with the YAML data serialization language. While we can't cover YAML in depth here, it is an open format and its specification is available for free at the YAML website. Each object in the scene is written to the file as a separate YAML document, which is introduced in the file by the --- sequence. Note that in this context, the term "object" refers to GameObjects, Components and other scene data collectively; each of these items requires its own YAML document in the scene file. The basic structure of a serialized object can be understood from an example:-
--- !u!1 &6
GameObject:
  m_ObjectHideFlags: 0
  m_PrefabParentObject: {fileID: 0}
  m_PrefabInternal: {fileID: 0}
  importerVersion: 3
  m_Component:
  - 4: {fileID: 8}
  - 33: {fileID: 12}
  - 65: {fileID: 13}
  - 23: {fileID: 11}
  m_Layer: 0
  m_Name: Cube
  m_TagString: Untagged
  m_Icon: {fileID: 0}
  m_NavMeshLayer: 0
  m_StaticEditorFlags: 0
  m_IsActive: 1
The first line contains the string "!u!1 &6" after the document marker. The first number after the "!u!" part indicates the class of the object (in this case, it is a GameObject). The number following the ampersand is an object ID number which is unique within the file, although the number is assigned to each object arbitrarily. Each of the object's serializable properties is denoted by a line like the following:-
m_Name: Cube
Properties are typically prefixed with "m_" but otherwise follow the name of the property as defined in the script reference. A second object, defined further down in the file, might look something like this:-
--- !u!4 &8
Transform:
  m_ObjectHideFlags: 0
  m_PrefabParentObject: {fileID: 0}
  m_PrefabInternal: {fileID: 0}
  m_GameObject: {fileID: 6}
  m_LocalRotation: {x: 0.000000, y: 0.000000, z: 0.000000, w: 1.000000}
  m_LocalPosition: {x: -2.618721, y: 1.028581, z: 1.131627}
  m_LocalScale: {x: 1.000000, y: 1.000000, z: 1.000000}
  m_Children: []
  m_Father: {fileID: 0}
This is a Transform component attached to the GameObject defined by the YAML document above. The attachment is denoted by the line:-
m_GameObject: {fileID: 6}
...since the GameObject's object ID within the file was 6.
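These "--- !u!&lt;classID&gt; &amp;&lt;objectID&gt;" markers are easy to process with ordinary text tools, which is one benefit of the textual format. As a minimal sketch (the helper name and sample scene are our own, not part of Unity), the following Python builds a map from object ID to class ID:

```python
import re

# Matches Unity's YAML document markers, e.g. "--- !u!4 &8"
MARKER = re.compile(r"^--- !u!(\d+) &(\d+)", re.MULTILINE)

def index_scene(text):
    """Return a dict mapping object ID -> class ID for each YAML document."""
    return {int(obj_id): int(class_id) for class_id, obj_id in MARKER.findall(text)}

scene = """\
--- !u!1 &6
GameObject:
  m_Name: Cube
--- !u!4 &8
Transform:
  m_GameObject: {fileID: 6}
"""
print(index_scene(scene))  # {6: 1, 8: 4}
```

A tool built this way could, for example, follow the {fileID: 6} reference back to the GameObject document.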
Floating point numbers can be represented in a decimal representation or as a hexadecimal number in IEEE 754 format (denoted by a 0x prefix). The IEEE 754 representation is used for lossless encoding of values, and is used by Unity when writing floating point values which don't have a short decimal representation. When Unity writes numbers in hexadecimal, it will always also write the decimal format in parentheses for debugging purposes, but only the hex is actually parsed when loading the file. If you wish to edit such values manually, simply remove the hex and enter only a decimal number. Here are some valid representations of floating point values (all representing the number one):
myValue: 0x3F800000
myValue: 1
myValue: 1.000
myValue: 0x3f800000(1)
myValue: 0.1e1
Page last updated: 2012-01-06
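The hexadecimal form above is simply the raw bit pattern of an IEEE 754 single-precision value. A short Python sketch (our own helper names, not part of Unity's tooling) shows the round trip between the two representations:

```python
import struct

def decode_unity_float(hex_bits):
    """Interpret a 32-bit unsigned integer as the bit pattern of an IEEE 754 float."""
    return struct.unpack(">f", struct.pack(">I", hex_bits))[0]

def encode_unity_float(value):
    """Return the bit pattern of a 32-bit float as an unsigned integer."""
    return struct.unpack(">I", struct.pack(">f", value))[0]

print(decode_unity_float(0x3F800000))  # 1.0
print(hex(encode_unity_float(1.0)))    # 0x3f800000
```

This is why 0x3F800000 and 1.000 are interchangeable in a scene file: both decode to exactly the same float.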
YAMLSceneExample
An Example of a YAML Scene File
An example of a simple but complete scene is given below. The scene contains just a camera and a cube object. Note that the file must start with the two lines
%YAML 1.1
%TAG !u! tag:unity3d.com,2011:
...in order to be accepted by Unity. Otherwise, the import process is designed to be tolerant of omissions - default values will be supplied for missing property data as far as possible.
%YAML 1.1
%TAG !u! tag:unity3d.com,2011:
--- !u!header
SerializedFile:
  m_TargetPlatform: 4294967294
  m_UserInformation:
--- !u!29 &1
Scene:
  m_ObjectHideFlags: 0
  m_PVSData:
  m_QueryMode: 1
  m_PVSObjectsArray: []
  m_PVSPortalsArray: []
  m_ViewCellSize: 1.000000
--- !u!104 &2
RenderSettings:
  m_Fog: 0
  m_FogColor: {r: 0.500000, g: 0.500000, b: 0.500000, a: 1.000000}
  m_FogMode: 3
  m_FogDensity: 0.010000
  m_LinearFogStart: 0.000000
  m_LinearFogEnd: 300.000000
  m_AmbientLight: {r: 0.200000, g: 0.200000, b: 0.200000, a: 1.000000}
  m_SkyboxMaterial: {fileID: 0}
  m_HaloStrength: 0.500000
  m_FlareStrength: 1.000000
  m_HaloTexture: {fileID: 0}
  m_SpotCookie: {fileID: 0}
  m_ObjectHideFlags: 0
--- !u!127 &3
GameManager:
  m_ObjectHideFlags: 0
--- !u!157 &4
LightmapSettings:
  m_ObjectHideFlags: 0
  m_LightProbeCloud: {fileID: 0}
  m_Lightmaps: []
  m_LightmapsMode: 1
  m_BakedColorSpace: 0
  m_UseDualLightmapsInForward: 0
  m_LightmapEditorSettings:
    m_Resolution: 50.000000
    m_LastUsedResolution: 0.000000
    m_TextureWidth: 1024
    m_TextureHeight: 1024
    m_BounceBoost: 1.000000
    m_BounceIntensity: 1.000000
    m_SkyLightColor: {r: 0.860000, g: 0.930000, b: 1.000000, a: 1.000000}
    m_SkyLightIntensity: 0.000000
    m_Quality: 0
    m_Bounces: 1
    m_FinalGatherRays: 1000
    m_FinalGatherContrastThreshold: 0.050000
    m_FinalGatherGradientThreshold: 0.000000
    m_FinalGatherInterpolationPoints: 15
    m_AOAmount: 0.000000
    m_AOMaxDistance: 0.100000
    m_AOContrast: 1.000000
    m_TextureCompression: 0
    m_LockAtlas: 0
--- !u!196 &5
NavMeshSettings:
  m_ObjectHideFlags: 0
  m_BuildSettings:
    cellSize: 0.200000
    cellHeight: 0.100000
    agentSlope: 45.000000
    agentClimb: 0.900000
    ledgeDropHeight: 0.000000
    maxJumpAcrossDistance: 0.000000
    agentRadius: 0.400000
    agentHeight: 1.800000
    maxEdgeLength: 12
    maxSimplificationError: 1.300000
    regionMinSize: 8
    regionMergeSize: 20
    tileSize: 500
    detailSampleDistance: 6.000000
    detailSampleMaxError: 1.000000
    accuratePlacement: 0
  m_NavMesh: {fileID: 0}
--- !u!1 &6
GameObject:
  m_ObjectHideFlags: 0
  m_PrefabParentObject: {fileID: 0}
  m_PrefabInternal: {fileID: 0}
  importerVersion: 3
  m_Component:
  - 4: {fileID: 8}
  - 33: {fileID: 12}
  - 65: {fileID: 13}
  - 23: {fileID: 11}
  m_Layer: 0
  m_Name: Cube
  m_TagString: Untagged
  m_Icon: {fileID: 0}
  m_NavMeshLayer: 0
  m_StaticEditorFlags: 0
  m_IsActive: 1
--- !u!1 &7
GameObject:
  m_ObjectHideFlags: 0
  m_PrefabParentObject: {fileID: 0}
  m_PrefabInternal: {fileID: 0}
  importerVersion: 3
  m_Component:
  - 4: {fileID: 9}
  - 20: {fileID: 10}
  - 92: {fileID: 15}
  - 124: {fileID: 16}
  - 81: {fileID: 14}
  m_Layer: 0
  m_Name: Main Camera
  m_TagString: MainCamera
  m_Icon: {fileID: 0}
  m_NavMeshLayer: 0
  m_StaticEditorFlags: 0
  m_IsActive: 1
--- !u!4 &8
Transform:
  m_ObjectHideFlags: 0
  m_PrefabParentObject: {fileID: 0}
  m_PrefabInternal: {fileID: 0}
  m_GameObject: {fileID: 6}
  m_LocalRotation: {x: 0.000000, y: 0.000000, z: 0.000000, w: 1.000000}
  m_LocalPosition: {x: -2.618721, y: 1.028581, z: 1.131627}
  m_LocalScale: {x: 1.000000, y: 1.000000, z: 1.000000}
  m_Children: []
  m_Father: {fileID: 0}
--- !u!4 &9
Transform:
  m_ObjectHideFlags: 0
  m_PrefabParentObject: {fileID: 0}
  m_PrefabInternal: {fileID: 0}
  m_GameObject: {fileID: 7}
  m_LocalRotation: {x: 0.000000, y: 0.000000, z: 0.000000, w: 1.000000}
  m_LocalPosition: {x: 0.000000, y: 1.000000, z: -10.000000}
  m_LocalScale: {x: 1.000000, y: 1.000000, z: 1.000000}
  m_Children: []
  m_Father: {fileID: 0}
--- !u!20 &10
Camera:
  m_ObjectHideFlags: 0
  m_PrefabParentObject: {fileID: 0}
  m_PrefabInternal: {fileID: 0}
  m_GameObject: {fileID: 7}
  m_Enabled: 1
  importerVersion: 2
  m_ClearFlags: 1
  m_BackGroundColor: {r: 0.192157, g: 0.301961, b: 0.474510, a: 0.019608}
  m_NormalizedViewPortRect:
    importerVersion: 2
    x: 0.000000
    y: 0.000000
    width: 1.000000
    height: 1.000000
  near clip plane: 0.300000
  far clip plane: 1000.000000
  field of view: 60.000000
  orthographic: 0
  orthographic size: 100.000000
  m_Depth: -1.000000
  m_CullingMask:
    importerVersion: 2
    m_Bits: 4294967295
  m_RenderingPath: -1
  m_TargetTexture: {fileID: 0}
  m_HDR: 0
--- !u!23 &11
Renderer:
  m_ObjectHideFlags: 0
  m_PrefabParentObject: {fileID: 0}
  m_PrefabInternal: {fileID: 0}
  m_GameObject: {fileID: 6}
  m_Enabled: 1
  m_CastShadows: 1
  m_ReceiveShadows: 1
  m_LightmapIndex: 255
  m_LightmapTilingOffset: {x: 1.000000, y: 1.000000, z: 0.000000, w: 0.000000}
  m_Materials:
  - {fileID: 10302, guid: 0000000000000000e000000000000000, type: 0}
  m_SubsetIndices:
  m_StaticBatchRoot: {fileID: 0}
  m_LightProbeAnchor: {fileID: 0}
  m_UseLightProbes: 0
  m_ScaleInLightmap: 1.000000
--- !u!33 &12
MeshFilter:
  m_ObjectHideFlags: 0
  m_PrefabParentObject: {fileID: 0}
  m_PrefabInternal: {fileID: 0}
  m_GameObject: {fileID: 6}
  m_Mesh: {fileID: 10202, guid: 0000000000000000e000000000000000, type: 0}
--- !u!65 &13
BoxCollider:
  m_ObjectHideFlags: 0
  m_PrefabParentObject: {fileID: 0}
  m_PrefabInternal: {fileID: 0}
  m_GameObject: {fileID: 6}
  m_Material: {fileID: 0}
  m_IsTrigger: 0
  m_Enabled: 1
  importerVersion: 2
  m_Size: {x: 1.000000, y: 1.000000, z: 1.000000}
  m_Center: {x: 0.000000, y: 0.000000, z: 0.000000}
--- !u!81 &14
AudioListener:
  m_ObjectHideFlags: 0
  m_PrefabParentObject: {fileID: 0}
  m_PrefabInternal: {fileID: 0}
  m_GameObject: {fileID: 7}
  m_Enabled: 1
--- !u!92 &15
Behaviour:
  m_ObjectHideFlags: 0
  m_PrefabParentObject: {fileID: 0}
  m_PrefabInternal: {fileID: 0}
  m_GameObject: {fileID: 7}
  m_Enabled: 1
--- !u!124 &16
Behaviour:
  m_ObjectHideFlags: 0
  m_PrefabParentObject: {fileID: 0}
  m_PrefabInternal: {fileID: 0}
  m_GameObject: {fileID: 7}
  m_Enabled: 1
--- !u!1026 &17
HierarchyState:
  m_ObjectHideFlags: 0
  expanded: []
  selection: []
  scrollposition_x: 0.000000
  scrollposition_y: 0.000000
Page last updated: 2011-10-13
ClassIDReference
A reference of common class ID numbers used by the YAML file format is given below, both in numerical order of class IDs and alphabetical order of class names. Note that some ranges of numbers are intentionally omitted from the sequence - these may represent classes that have been removed from the API or may be reserved for future use. Classes defined from scripts will always have class ID 114 (MonoBehaviour).
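For instance, the m_Component list of the "Cube" GameObject in the earlier scene example pairs each class ID with a fileID. A small Python sketch (the dict below copies a few rows from this reference; the variable names are our own) resolves those IDs to class names:

```python
# A few entries from the class ID reference.
CLASS_IDS = {1: "GameObject", 4: "Transform", 20: "Camera",
             23: "MeshRenderer", 33: "MeshFilter", 65: "BoxCollider"}

# The m_Component list of the "Cube" GameObject pairs class IDs with fileIDs.
components = [(4, 8), (33, 12), (65, 13), (23, 11)]
resolved = [CLASS_IDS[class_id] for class_id, _ in components]
print(resolved)  # ['Transform', 'MeshFilter', 'BoxCollider', 'MeshRenderer']
```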
Classes Ordered by ID Number
1 GameObject
2 Component
3 LevelGameManager
4 Transform
5 TimeManager
6 GlobalGameManager
8 Behaviour
9 GameManager
11 AudioManager
12 ParticleAnimator
13 InputManager
15 EllipsoidParticleEmitter
17 Pipeline
18 EditorExtension
20 Camera
21 Material
23 MeshRenderer
25 Renderer
26 ParticleRenderer
27 Texture
28 Texture2D
29 Scene
30 RenderManager
33 MeshFilter
41 OcclusionPortal
43 Mesh
45 Skybox
47 QualitySettings
48 Shader
49 TextAsset
52 NotificationManager
54 Rigidbody
55 PhysicsManager
56 Collider
57 Joint
59 HingeJoint
64 MeshCollider
65 BoxCollider
71 AnimationManager
74 AnimationClip
75 ConstantForce
76 WorldParticleCollider
78 TagManager
81 AudioListener
82 AudioSource
83 AudioClip
84 RenderTexture
87 MeshParticleEmitter
88 ParticleEmitter
89 Cubemap
92 GUILayer
94 ScriptMapper
96 TrailRenderer
98 DelayedCallManager
102 TextMesh
104 RenderSettings
108 Light
109 CGProgram
111 Animation
114 MonoBehaviour
115 MonoScript
116 MonoManager
117 Texture3D
119 Projector
120 LineRenderer
121 Flare
122 Halo
123 LensFlare
124 FlareLayer
125 HaloLayer
126 NavMeshLayers
127 HaloManager
128 Font
129 PlayerSettings
130 NamedObject
131 GUITexture
132 GUIText
133 GUIElement
134 PhysicMaterial
135 SphereCollider
136 CapsuleCollider
137 SkinnedMeshRenderer
138 FixedJoint
140 RaycastCollider
141 BuildSettings
142 AssetBundle
143 CharacterController
144 CharacterJoint
145 SpringJoint
146 WheelCollider
147 ResourceManager
148 NetworkView
149 NetworkManager
150 PreloadData
152 MovieTexture
153 ConfigurableJoint
154 TerrainCollider
155 MasterServerInterface
156 TerrainData
157 LightmapSettings
158 WebCamTexture
159 EditorSettings
160 InteractiveCloth
161 ClothRenderer
163 SkinnedCloth
164 AudioReverbFilter
165 AudioHighPassFilter
166 AudioChorusFilter
167 AudioReverbZone
168 AudioEchoFilter
169 AudioLowPassFilter
170 AudioDistortionFilter
180 AudioBehaviour
181 AudioFilter
182 WindZone
183 Cloth
184 SubstanceArchive
185 ProceduralMaterial
186 ProceduralTexture
191 OffMeshLink
192 OcclusionArea
193 Tree
194 NavMesh
195 NavMeshAgent
196 NavMeshSettings
197 LightProbeCloud
198 ParticleSystem
199 ParticleSystemRenderer
205 LODGroup
220 LightProbeGroup
1001 Prefab
1002 EditorExtensionImpl
1003 AssetImporter
1004 AssetDatabase
1005 Mesh3DSImporter
1006 TextureImporter
1007 ShaderImporter
1020 AudioImporter
1026 HierarchyState
1027 GUIDSerializer
1028 AssetMetaData
1029 DefaultAsset
1030 DefaultImporter
1031 TextScriptImporter
1034 NativeFormatImporter
1035 MonoImporter
1037 AssetServerCache
1038 LibraryAssetImporter
1040 ModelImporter
1041 FBXImporter
1042 TrueTypeFontImporter
1044 MovieImporter
1045 EditorBuildSettings
1046 DDSImporter
1048 InspectorExpandedState
1049 AnnotationManager
1050 MonoAssemblyImporter
1051 EditorUserBuildSettings
1052 PVRImporter
1112 SubstanceImporter
Classes Ordered Alphabetically
Animation 111
AnimationClip 74
AnimationManager 71
AnnotationManager 1049
AssetBundle 142
AssetDatabase 1004
AssetImporter 1003
AssetMetaData 1028
AssetServerCache 1037
AudioBehaviour 180
AudioChorusFilter 166
AudioClip 83
AudioDistortionFilter 170
AudioEchoFilter 168
AudioFilter 181
AudioHighPassFilter 165
AudioImporter 1020
AudioListener 81
AudioLowPassFilter 169
AudioManager 11
AudioReverbFilter 164
AudioReverbZone 167
AudioSource 82
Behaviour 8
BoxCollider 65
BuildSettings 141
Camera 20
CapsuleCollider 136
CGProgram 109
CharacterController 143
CharacterJoint 144
Cloth 183
ClothRenderer 161
Collider 56
Component 2
ConfigurableJoint 153
ConstantForce 75
Cubemap 89
DDSImporter 1046
DefaultAsset 1029
DefaultImporter 1030
DelayedCallManager 98
EditorBuildSettings 1045
EditorExtension 18
EditorExtensionImpl 1002
EditorSettings 159
EditorUserBuildSettings 1051
EllipsoidParticleEmitter 15
FBXImporter 1041
FixedJoint 138
Flare 121
FlareLayer 124
Font 128
GameManager 9
GameObject 1
GlobalGameManager 6
GUIDSerializer 1027
GUIElement 133
GUILayer 92
GUIText 132
GUITexture 131
Halo 122
HaloLayer 125
HaloManager 127
HierarchyState 1026
HingeJoint 59
InputManager 13
InspectorExpandedState 1048
InteractiveCloth 160
Joint 57
LensFlare 123
LevelGameManager 3
LibraryAssetImporter 1038
Light 108
LightmapSettings 157
LightProbeCloud 197
LightProbeGroup 220
LineRenderer 120
LODGroup 205
MasterServerInterface 155
Material 21
Mesh 43
Mesh3DSImporter 1005
MeshCollider 64
MeshFilter 33
MeshParticleEmitter 87
MeshRenderer 23
ModelImporter 1040
MonoAssemblyImporter 1050
MonoBehaviour 114
MonoImporter 1035
MonoManager 116
MonoScript 115
MovieImporter 1044
MovieTexture 152
NamedObject 130
NativeFormatImporter 1034
NavMesh 194
NavMeshAgent 195
NavMeshLayers 126
NavMeshSettings 196
NetworkManager 149
NetworkView 148
NotificationManager 52
OcclusionArea 192
OcclusionPortal 41
OffMeshLink 191
ParticleAnimator 12
ParticleEmitter 88
ParticleRenderer 26
ParticleSystem 198
ParticleSystemRenderer 199
PhysicMaterial 134
PhysicsManager 55
Pipeline 17
PlayerSettings 129
Prefab 1001
PreloadData 150
ProceduralMaterial 185
ProceduralTexture 186
Projector 119
PVRImporter 1052
QualitySettings 47
RaycastCollider 140
Renderer 25
RenderManager 30
RenderSettings 104
RenderTexture 84
ResourceManager 147
Rigidbody 54
Scene 29
ScriptMapper 94
Shader 48
ShaderImporter 1007
SkinnedCloth 163
SkinnedMeshRenderer 137
Skybox 45
SphereCollider 135
SpringJoint 145
SubstanceArchive 184
SubstanceImporter 1112
TagManager 78
TerrainCollider 154
TerrainData 156
TextAsset 49
TextMesh 102
TextScriptImporter 1031
Texture 27
Texture2D 28
Texture3D 117
TextureImporter 1006
TimeManager 5
TrailRenderer 96
Transform 4
Tree 193
TrueTypeFontImporter 1042
WebCamTexture 158
WheelCollider 146
WindZone 182
WorldParticleCollider 76
StreamingAssets
Most assets in Unity are combined into the project when it is built. However, it is sometimes useful to place files into the normal filesystem on the target machine to make them accessible via a pathname. An example of this is the deployment of a movie file on iOS devices; the original movie file must be available from a location in the filesystem to be played by the PlayMovie function.
Any files placed in a folder called StreamingAssets in a Unity project will be copied verbatim to a particular folder on the target machine. On a desktop computer (Mac OS or Windows) the location of the files can be obtained with the following code:-
path = Application.dataPath + "/StreamingAssets";
On iOS, you should use:-
path = Application.dataPath + "/Raw";
...while on Android, you should use:-
path = "jar:file://" + Application.dataPath + "!/assets/";
Note that on Android, the files are contained within a compressed .jar file (which is essentially the same format as standard zip-compressed files). This means that if you do not use Unity's WWW class to retrieve the file then you will need to use additional software to see inside the .jar archive and obtain the file.
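Since a .jar is zip-format, its contents can be read with any zip library. The following self-contained Python sketch (the archive and file names are hypothetical, chosen only to mirror the "!/assets/" layout above) illustrates the principle by writing and then reading a jar-like archive in memory:

```python
import io
import zipfile

# Build a small jar-like (zip-format) archive in memory to stand in for the
# package Unity produces; the entry path mirrors the "!/assets/<file>" layout.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("assets/readme.txt", "hello from StreamingAssets")

# Reading an entry back out is plain zip extraction.
with zipfile.ZipFile(buf) as jar:
    data = jar.read("assets/readme.txt").decode("utf-8")
print(data)  # hello from StreamingAssets
```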
Page last updated: 2012-01-18
Command Line Arguments
Typically, Unity will be launched by double-clicking its icon from the desktop but it is also possible to run it from the command line (ie, the MacOS Terminal or the Windows Command Prompt). When launched in this way, Unity can receive commands and information on startup, which can be very useful for test suites, automated builds and other production tasks.
Under MacOS, you can launch Unity from the Terminal by typing:-
/Applications/Unity/Unity.app/Contents/MacOS/Unity
...while under Windows, you should type
"C:\Program Files (x86)\Unity\Editor\Unity.exe"
...at the command prompt.
Standalone Unity games can be launched in a similar way.
Command Line Arguments
As mentioned above, the editor and also built games can optionally be supplied with additional commands and information on startup. This is done using the following command line arguments:-
- -batchmode
- Run Unity in batch mode. This should always be used in conjunction with the other command line arguments, as it ensures no pop-up windows appear and eliminates the need for any human intervention. When an exception occurs during execution of script code, when asset server updates fail, or when other operations fail, Unity will immediately exit with return code 1. Note that in batch mode, Unity will send a minimal version of its log output to the console. However, the Log Files still contain the full log information.
- -quit
- Quit the Unity editor after other commands have finished executing. Note that this can cause error messages to be hidden (but they will show up in the Editor.log file).
- -buildWindowsPlayer <pathname>
- Build a standalone Windows player (eg, -buildWindowsPlayer path/to/your/build.exe).
- -buildOSXPlayer <pathname>
- Build a standalone Mac OSX player (eg, -buildOSXPlayer path/to/your/build.app).
- -buildLinux32Player <pathname>
- Build a 32-bit standalone Linux player (eg, -buildLinux32Player path/to/your/build).
- -buildLinux64Player <pathname>
- Build a 64-bit standalone Linux player (eg, -buildLinux64Player path/to/your/build).
- -importPackage <pathname>
- Import the given package. No import dialog is shown.
- -createProject <pathname>
- Create an empty project at the given path.
- -projectPath <pathname>
- Open the project at the given path.
- -logFile <pathname>
- Specify where the Editor or Windows standalone log file will be written.
- -assetServerUpdate <IP[:port] projectName username password [r <revision>]>
- Force an update of the project in the Asset Server given by IP:port. The port is optional and if not given it is assumed to be the standard one (10733). It is advisable to use this command in conjunction with the -projectPath argument to ensure you are working with the correct project. If no project name is given then the last project opened by Unity is used. If no project exists at the path given by -projectPath then one is created automatically.
- -exportPackage <exportAssetPath1 exportAssetPath2 ExportAssetPath3 exportFileName>
- Exports a package given a path (or set of paths). exportAssetPath is a folder (relative to the Unity project root) to export from the Unity project, and exportFileName is the package name. Currently, this option can only export whole folders at a time. This command normally needs to be used with the -projectPath argument.
- -nographics (Windows only)
- When running in batch mode, do not initialize the graphics device at all. This makes it possible to run your automated workflows on machines that don't even have a GPU. Note, however, that workflows which rely on simulated input commands only work when a window has focus. A standalone player generated with this option will not feature any graphics.
- -executeMethod <ClassName.MethodName>
- Execute the static method as soon as Unity is started, the project is open and after the optional asset server update has been performed. This can be used to do continuous integration, perform unit tests, make builds, prepare data, etc. If you want to return an error from the command line process, you can either throw an exception, which will cause Unity to exit with return code 1, or else call EditorApplication.Exit with a non-zero code. If you want to pass parameters, you can add them to the command line and retrieve them inside the method using System.Environment.GetCommandLineArgs.
To use -executeMethod you need to have a script in an Editor folder and a static function in the class.
// C# example
using UnityEditor;

class MyEditorScript
{
    static void PerformBuild ()
    {
        string[] scenes = { "Assets/MyScene.unity" };
        BuildPipeline.BuildPlayer(scenes, ...);
    }
}
// JavaScript example
static function PerformBuild ()
{
    var scenes : String[] = ["Assets/MyScene.unity"];
    BuildPipeline.BuildPlayer(scenes, ...);
}
Example usage
Execute Unity in batch mode, execute MyEditorScript.MyMethod method, and quit upon completion.
Windows:
C:\program files\Unity\Editor>Unity.exe -quit -batchmode -executeMethod MyEditorScript.MyMethod
Mac OS:
/Applications/Unity/Unity.app/Contents/MacOS/Unity -quit -batchmode -executeMethod MyEditorScript.MyMethod
Execute Unity in batch mode. Use the project path given and update from the asset server. Execute the given method after all assets have been downloaded and imported from the asset server. After the method has finished execution, automatically quit Unity.
/Applications/Unity/Unity.app/Contents/MacOS/Unity -batchmode -projectPath ~/UnityProjects/AutobuildProject -assetServerUpdate 192.168.1.1 MyGame AutobuildUser l33tpa33 -executeMethod MyEditorScript.PerformBuild -quit
Unity Standalone Player command line arguments
Standalone players built with Unity also understand some command line arguments:
- -batchmode
- Run the game in "headless" mode. The game will not display anything or accept user input. This is mostly useful for running servers for networked games.
- -force-opengl (Windows only)
- Make the game use OpenGL for rendering, even if Direct3D is available. Normally Direct3D is used but OpenGL is used if Direct3D 9.0c is not available.
- -force-d3d9 (Windows only)
- Make the game use Direct3D 9 for rendering. This is the default, so normally there's no reason to pass it.
- -force-d3d11 (Windows only)
- Make the game use Direct3D 11 for rendering.
- -single-instance (Linux & Windows only)
- Allow only one instance of the game to run at a time. If another instance is already running then launching it again with -single-instance will just focus the existing one.
- -nolog (Windows only)
- Do not produce an output log. Normally output_log.txt is written in the *_Data folder next to the game executable, where Debug.Log output is printed.
- -force-d3d9-ref (Windows only)
- Make the game run using Direct3D's "Reference" software renderer. The DirectX SDK has to be installed for this to work. This is mostly useful for building automated test suites, where you want to ensure rendering is exactly the same no matter what graphics card is being used.
- -adapter N (Windows only)
- Allows the game to run full-screen on another display, where N denotes the display number.
- -popupwindow (Windows only)
- The window will be created as a pop-up window (without a frame).
- -screen-width (Linux & Windows only)
- Overrides the default screen width. This must be an integer from a supported resolution.
- -screen-height (Linux & Windows only)
- Overrides the default screen height. This must be an integer from a supported resolution.
- -screen-quality (Linux only)
- Overrides the default screen quality. Example usage would be:
/path/to/myGame -screen-quality Beautiful
Editor Installer
The following options can be used when installing the Unity Editor from command line:
- /S (Windows only)
- Performs a silent (no questions asked) install.
- /D=PATH (Windows only)
- Sets the default install directory. Useful when combined with the silent install option.
Example usage
Install Unity silently to E:\Development\Unity.
Windows:UnitySetup.exe /S /D=E:\Development\Unity
RunningEditorCodeOnLaunch
Sometimes it is useful to run some editor script code in a project as soon as Unity launches, without requiring action from the user. You can do this by applying the InitializeOnLoad attribute to a class which has a static constructor. A static constructor is a function with the same name as the class, declared static and without a return type or parameters (see here for more information):
using UnityEngine;
using UnityEditor;
[InitializeOnLoad]
public class Startup {
static Startup()
{
Debug.Log("Up and running");
}
}
A static constructor is always guaranteed to be called before any static function or instance of the class is used, but the InitializeOnLoad attribute ensures that it is called as the editor launches.
An example of how this technique can be used is in setting up a regular callback in the editor (its "frame update", as it were). The EditorApplication class has a delegate called update which is called many times a second while the editor is running. To have this delegate enabled as the project launches, you could use code like the following:
using UnityEditor;
using UnityEngine;
[InitializeOnLoad]
class MyClass
{
static MyClass ()
{
EditorApplication.update += Update;
}
static void Update ()
{
Debug.Log("Updating");
}
}
Page last updated: 2011-09-01
NetworkEmulation
As part of Unity's networking features, you can emulate slower internet connections in order to test the game experience of users in low-bandwidth areas.
To enable network emulation, open the network emulation menu item and select the desired connection speed to emulate.

Enabling network emulation.
Technical Details
Network emulation delays the sending of packets in networking traffic on the Network and NetworkView classes. The ping is artificially inflated for all options, with the inflation increasing as the emulated connection speed decreases. At the slowest settings, packet dropping and out-of-order delivery are also introduced to simulate the worst connections. Emulation persists regardless of whether you act in the server or client role.
Network emulation affects only the Network and NetworkView classes; it does not modify or emulate custom networking code written with .NET sockets.
Page last updated: 2012-11-09
Security Sandbox

Desktop
In Unity 3.0, the webplayer implements a security model very similar to the one used by the Adobe Flash player. These security restrictions apply only to the webplayer, and to the editor when the active build target is WebPlayer. The security model has several parts:
- Restrictions on accessing data on a domain other than the one hosting your .unity3d file.
- Some limitation on the usage of the Sockets.
- Disallowing invocation of any method we deemed off limits (things like File.Delete, etc.).
- Disallowing the usage of System.Reflection.* to call private/internal methods in classes you did not write yourself.
Currently only the first two parts of the security model are emulated in the Editor. Look here for a detailed list of which methods / classes are available in the webplayer.
The built-in multiplayer networking functionality of Unity (the UnityEngine.Network and UnityEngine.NetworkView classes, etc.) is not affected.
This document describes how to make sure your content keeps working with version 3.0 of the Unity webplayer.
- See the Unity API reference for information about the WWW class.
- See the .NET API reference for information about the .NET Socket class.
The WWW class and sockets use the same policy schema but besides that they are completely separate systems. The WWW policy only defines permissions on the web service where the policy is hosted but socket policies apply to all TCP/UDP socket connections.
The Unity editor comes with an "Emulate Web Security" feature that imposes the webplayer's security model. This makes it easy to detect problems from the comfort of the editor. You can find this setting in Edit->Project Settings->Editor.
Implications for usage of the WWW class
The Unity webplayer expects an http-served policy file named "crossdomain.xml" to be available on the domain you want to access with the WWW class (although this is not needed if it is the same domain that is hosting the unity3d file).
For example, imagine a tetris game, hosted at the following url:
http://gamecompany.com/games/tetris.unity3d
needs to access a highscore list from the following url:
http://highscoreprovider.net/gethighscore.php
In this case, you would need to place a crossdomain.xml file at the root of the highscoreprovider.net domain like this: http://highscoreprovider.net/crossdomain.xml
The contents of the crossdomain.xml file are in the format used by the Flash player. It is very likely that you'll find the crossdomain.xml file already in place. The policy in the file looks like this:
<?xml version="1.0"?> <cross-domain-policy> <allow-access-from domain="*"/> </cross-domain-policy>
When this file is placed at http://highscoreprovider.net/crossdomain.xml, the owner of that domain declares that the contents of the webserver may be accessed by any webplayer coming from any domain.
The Unity webplayer does not support the <allow-http-request-headers-from domain> and <site-control permitted-cross-domain-policies> tags. Note that crossdomain.xml should be an ASCII file.
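To illustrate the client side of this setup, a coroutine fetching the highscore list from the hypothetical URL in the example above might look like the following sketch (error handling kept minimal):

```csharp
// C# sketch: fetching data from another domain with the WWW class.
// In the webplayer this succeeds only if
// http://highscoreprovider.net/crossdomain.xml permits access, as described above.
using System.Collections;
using UnityEngine;

public class HighscoreFetcher : MonoBehaviour
{
    IEnumerator Start ()
    {
        WWW www = new WWW("http://highscoreprovider.net/gethighscore.php");
        yield return www;   // wait for the download to finish

        if (www.error != null)
            Debug.Log("Request failed: " + www.error);   // includes policy rejections
        else
            Debug.Log("Highscores: " + www.text);
    }
}
```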
Implications for usage of Sockets:
A Unity webplayer needs a socket-served policy in order to connect to a particular host. This policy is by default hosted by the target host on port 843, but it can be hosted on other ports as well. The functional difference with a non-default port is that the policy must be manually fetched with the Security.PrefetchSocketPolicy() API call, and if it is hosted on a port higher than 1024 the policy can only give access to other ports higher than 1024.
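A minimal sketch of that manual fetch (the host name and the non-default policy port are hypothetical values chosen for illustration):

```csharp
// C# sketch: manually fetching a socket policy hosted on a non-default port.
// Security.PrefetchSocketPolicy returns true once a policy has been received.
using UnityEngine;

public class PolicyPrefetch : MonoBehaviour
{
    void Start ()
    {
        // Policy server assumed to listen on port 3843 of the game server host.
        // Note: a policy on a port above 1024 can only grant access to ports above 1024.
        if (Security.PrefetchSocketPolicy("game.example.com", 3843))
            Debug.Log("Policy fetched; socket connections may now be attempted");
        else
            Debug.Log("Could not fetch socket policy");
    }
}
```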
When using the default port it works like this: when a Unity webplayer tries to make a TCP socket connection to a host, it first checks that the host server will accept the connection. It does this by opening a TCP socket on port 843, issuing a request, and expecting to receive a socket policy over the new connection. The Unity webplayer then checks that the host's policy permits the connection to go ahead, and will proceed without error if so. This process happens transparently to the user's code, which does not need to be modified to use this security model. An example of a socket policy looks like this:
<?xml version="1.0"?> <cross-domain-policy> <allow-access-from domain="*" to-ports="1200-1220"/> </cross-domain-policy>
This policy effectively says "Content from any domain is free to make socket connections at ports 1200-1220". The Unity webplayer will respect this, and reject any attempted socket connection using a port outside that range (a SecurityException will be thrown).
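Since a rejected connection surfaces as a SecurityException, webplayer connection code can guard against it explicitly; a sketch (host and port are hypothetical, with the port chosen inside the 1200-1220 range from the policy above):

```csharp
// C# sketch: a TCP connection attempt that handles socket policy rejection.
using System.Net.Sockets;
using System.Security;
using UnityEngine;

public class GuardedConnect : MonoBehaviour
{
    void Start ()
    {
        try
        {
            TcpClient client = new TcpClient();
            client.Connect("game.example.com", 1205);   // inside the allowed range
            Debug.Log("Connected");
        }
        catch (SecurityException e)
        {
            // Thrown when the host's socket policy does not permit this connection.
            Debug.Log("Connection rejected by socket policy: " + e.Message);
        }
    }
}
```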
When using UDP connections the policy can also be auto fetched when they need to be enforced in a similar manner as with TCP. The difference is that auto fetching with TCP happens when you Connect to something (ensures you are allowed to connect to a server), but with UDP, since it's connectionless, it also happens when you call any API point which sends or receives data (ensures you are allowed to send/receive traffic to/from a server).
The format used for the socket policy is the same as that used by the Flash player except some tags are not supported. The Unity webplayer only supports "*" as a valid value for the domain setting and the "to-ports" setting is mandatory.
<?xml version="1.0" encoding="ISO-8859-1"?> <!ELEMENT cross-domain-policy (allow-access-from*)> <!ELEMENT allow-access-from EMPTY> <!ATTLIST allow-access-from domain CDATA #REQUIRED> <!ATTLIST allow-access-from to-ports CDATA #REQUIRED>
The socket policy applies to both TCP and UDP connection types so both UDP and TCP traffic can be controlled by one policy server.
For your convenience, we provide a small program which simply listens on port 843; when it receives a request string on a connection, it replies with a valid socket policy.
The server code can be found inside the Unity install folder, in Data/Tools/SocketPolicyServer on Windows or /Unity.app/Contents/Tools/SocketPolicyServer on OS X. Note that the pre-built executable can be run on Mac since it is a Mono executable. Just type "mono sockpol.exe" to run it. Note that this example code shows the correct behaviour of a socket policy server. Specifically the server expects to receive a zero-terminated string that contains <policy-file-request/>. It only sends to the client the socket policy xml document when this string (and exactly this string) has been received. Further, it is required that the xml header and xml body are sent with a single socket write. Breaking the header and body into separate socket write operations can cause security exceptions due to Unity receiving an incomplete policy. If you experience any problems with your own server please consider using the example that we provide. This should help you diagnose whether you have server or network issues.
Third party networking libraries, commonly used for multiplayer game networking, should be able to work with these requirements as long as they do not depend on peer 2 peer functionality (see below) but utilize dedicated servers. These sometimes even come out of the box with support for hosting policies.
Note: whilst the crossdomain.xml and socket policy files are both xml documents and are broadly similar, the ways these documents are served are very different. Crossdomain.xml (which applies to http requests) is fetched using http on port 80, whereas the socket policy is fetched from port 843 using a trivial server that implements the <policy-file-request/>. You cannot use an http server to issue the socket policy file, nor set up a server that simply sends the socket policy file in response to a socket connection on port 843. Note also that each server you connect to requires its own socket policy server.
Debugging
You can use telnet to connect to the socket policy server. An example session is shown below:
host$ telnet localhost 843
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
<policy-file-request/>
<?xml version='1.0'?>
<cross-domain-policy>
<allow-access-from domain="*" to-ports="*" />
</cross-domain-policy>
Connection closed by foreign host.
host$
In this example session, telnet is used to connect to the localhost on port 843. Telnet responds with the first three lines, and then sits waiting for the user to enter something. The user has entered the policy request string <policy-file-request/>, which the socket policy server receives and responds with the socket policy. The server then disconnects causing telnet to report that the connection has been closed.
Listening sockets
You cannot create listening sockets in the webplayer; it cannot act as a server. Therefore webplayers cannot communicate with each other directly (peer 2 peer). When using TCP sockets you can only connect to remote endpoints, provided this is allowed through the socket policy system. For UDP it works the same, but the concept is a little different as it is a connectionless protocol: you don't have to connect/listen to send/receive packets. It works by enforcing that you can only receive packets from a server if it has first responded with a valid policy containing the allow-access-from domain tag.
This is all just so annoying, why does all this stuff exist?
The socket and WWW security features exist to protect people who install the Unity Web Player. Without these restrictions, an attack such as the following would be possible:
- Bob works at the White House.
- Frank is evil. He writes a Unity web game that pretends to be a game, but in the background does a WWW request to http://internal.whitehouse.gov/LocationOfNuclearBombs.pdf. internal.whitehouse.gov is a server that is not reachable from the internet, but is reachable from Bob's workstation because he works at the White House.
- Frank sends those pdf bytes to http://frank.com/secretDataUploader.php
- Frank places this game on http://www.frank.com/coolgame.unity3d
- Frank somehow convinces Bob to play the game.
- Bob plays the game.
- Game silently downloads the secret document, and sends it to Frank.
With the WWW and socket security features, this attack will fail, because before downloading the pdf, Unity checks http://internal.whitehouse.gov/crossdomain.xml, with the intent of asking that server: "is the data you have on your server available for public usage?". Placing a crossdomain.xml on a webserver can be seen as the response to that question. In the case of this example, the system operator of internal.whitehouse.gov would not place a crossdomain.xml on the server, which leads Unity not to download the pdf.
Unfortunately, in order to protect the people who install the Unity Web Player, people who develop in Unity need to take these security measures into account when developing content. The same restrictions are present in all major plugin technologies. (Flash, Silverlight, Shockwave)
Exceptions
In order to find the right balance between protecting Web Player users and making life of content developers easy, we have implemented an exception to the security mechanism described above:
- You are allowed to download images from servers that do not have a crossdomain.xml file. However, the only thing you are allowed to do with these images is use them as textures in your scene. You are not allowed to use GetPixel() on them. You are also no longer allowed to read back from the screen. Both attempts will result in a SecurityException being thrown. The reasoning here is that it's okay to download the image, as long as the content developer gets no access to it. So you can display it to the user, but you cannot send the bytes of the image back to some other server.
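For example, the following sketch (the URL is hypothetical) downloads an image from a server without a crossdomain.xml and applies it as a texture, which is the permitted use; uncommenting the GetPixel call would throw a SecurityException in the webplayer:

```csharp
// C# sketch: using a cross-domain image as a display-only texture.
using System.Collections;
using UnityEngine;

public class RemoteBillboard : MonoBehaviour
{
    IEnumerator Start ()
    {
        WWW www = new WWW("http://example.com/banner.png");   // no crossdomain.xml needed
        yield return www;

        renderer.material.mainTexture = www.texture;   // allowed: display only
        // www.texture.GetPixel(0, 0);  // would throw SecurityException in the webplayer
    }
}
```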
VisualStudioIntegration
What does this feature get me?
A more sophisticated C# development environment.
Think smart autocompletion, computer-assisted changes to source files, smart syntax highlighting and more.
What's the difference between Express and Pro?
VisualStudio C# 2010 is a product from Microsoft. It comes in an Express and a Professional edition.
The Express edition is free, and can be downloaded from http://www.microsoft.com/express/vcsharp/.
The Professional edition is not free; you can find out more about it at http://www.microsoft.com/visualstudio/en-us/products/professional/default.mspx.
Unity's VisualStudio integration has two components:
1) Unity creating and maintaining VisualStudio project files. Works with both Express and Professional.
2) Unity automatically opening VisualStudio when you double-click on a script, or on an error in Unity. Works with Professional only.
I've got Visual Studio Express. How do I use it?
- In Unity, select the project file sync item from the menu.
- Find the newly created .sln file in your Unity project (one folder up from the Assets folder).
- Open that file with Visual Studio Express.
- You can now edit all your script files and switch back to Unity to use them.
I've got Visual Studio Professional. How do I use it?
- In Unity, go to Edit->Preferences and make sure that Visual Studio is selected as your preferred external editor.
- Double-click a C# file in your project. Visual Studio should automatically open that file.
- You can edit the file, save, and switch back to Unity.
A few things to look out for:
- Even though Visual Studio comes with its own C# compiler, which you can use to check your C# scripts for errors, Unity still uses its own C# compiler to compile your scripts. Using the Visual Studio compiler is still quite useful, because it means you don't have to switch back to Unity all the time to check whether you have any errors.
- Visual Studio's C# compiler has more features than Unity's C# compiler currently supports. This means that some code (especially newer C# features) will not give an error in Visual Studio but will in Unity.
- Unity automatically creates and maintains the Visual Studio .sln and .csproj files. Whenever anybody adds, renames, moves or deletes a file from within Unity, Unity regenerates the .sln and .csproj files. You can also add files to your solution from Visual Studio. Unity will then import those new files, and the next time Unity regenerates the project files it will include them.
- Unity does not regenerate the Visual Studio project files after an AssetServer update or an SVN update. You can manually instruct Unity to regenerate the Visual Studio project files through the menu.
ExternalVersionControlSystemSupport
Unity offers an Asset Server add-on product for easy integrated versioning of your projects. If for some reason you are not able to use the Unity Asset Server, it is possible to store your project in any other version control system, such as Subversion, Perforce or Bazaar. This requires some initial manual setup of your project.
Before checking your project in, you have to tell Unity to modify the project structure slightly to make it compatible with storing assets in an external version control system. This is done by enabling External Version Control support in the application menu: select Metafiles in the dropdown for Version Control. This will create a text file for every asset in the Assets directory containing the necessary bookkeeping information required by Unity. The files will have a .meta file extension, with the first part being the full file name of the asset they are associated with. Moving and renaming assets within Unity should also update the relevant .meta files. However, if you move or rename assets with an external tool, make sure to synchronize the relevant .meta files as well.
When checking the project into a version control system, you should add the Assets and the ProjectSettings directories to the system. The Library directory should be completely ignored - when using external version control, it's only a local cache of imported assets.
When creating new assets, make sure both the asset itself and the associated .meta file are added to version control.
Example: Creating a new project and importing it to a Subversion repository.
First, let's assume that we have a subversion repository at svn://my.svn.server.com/ and want to create a project at svn://my.svn.server.com/MyUnityProject.
Then follow these steps to create the initial import in the system:
- Create a new project inside Unity and let's call it InitialUnityProject. You can add any initial assets here or add them later on.
- Enable Meta files in the Version Control settings as described above.
- Quit Unity (we do this to ensure that all the files are saved).
- Delete the Library directory inside your project directory.
- Import the project directory into Subversion. If you are using the command line client, this is done like this from the directory where your initial project is located:
svn import -m"Initial project import" InitialUnityProject svn://my.svn.server.com/MyUnityProject
If successful, the project should now be imported into Subversion and you can delete the InitialUnityProject directory if you wish.
- Check out the project back from Subversion:
svn co svn://my.svn.server.com/MyUnityProject
Check that the Assets and ProjectSettings directories are versioned.
- Open the checked-out project with Unity by launching it while holding down the Option or the left Alt key. Opening the project will recreate the Library directory deleted above.
- Optional: set up an ignore filter for the unversioned Library directory:
svn propedit svn:ignore MyUnityProject/
Subversion will open a text editor. Add the Library directory.
- Finally commit the changes. The project should now be set up and ready:
svn ci -m"Finishing project import" MyUnityProject
Analytics
The Unity editor is configured to send anonymous usage data back to Unity. This information is used to help improve the features of the editor. The analytics are collected using Google Analytics. Unity makes calls to a URI hosted by Google. The URN part of the URI contains details that describe what editor features or events have been used.
Examples of collected data
The following are examples of data that Unity might collect.
Which menu items have been used. If some menu items are used rarely or not at all, we could in the future simplify the menu system.
Build times. By collecting how long builds take to make, we can focus engineering effort on optimizing the correct code.
Lightmap baking. Again, timing and reporting how long it takes for lightmaps to bake can help us decide how much effort to spend on optimizing this area.
Disabling Analytics
If you do not want to send anonymous data to Unity then the sending of Analytics can be disabled. To do this untick the box in the Unity Preferences General tab.

Editor analytics in the preferences pane.
Version Check
Unity は、アップデートが使用可能かチェックします。 これは Unity 起動時か、メニュー項目の Help->Check for Updates 選択時に行われます。 このアップデート チェックは、現在の Unity の改訂番号 (About Unity ダイアログでバージョン名の後に括弧内に表示される 5 桁の番号) をアップデート サーバーに送信し、そこで、最新のリリースバージョンと比較されます。 Unity の新しいバージョンが利用できる場合、以下のダイアログが表示されます。

Unity の新しいバージョンがダウンロードできる場合に表示されるウィンドウ。
使用中のバージョンが最新の場合、以下のダイアログが表示されます。

Unity が最新のバージョンに更新される際に表示されるウィンドウ。
Download new version ボタンをクリックすると、新しいバージョンをダウンロードできるウェブサイトに移動します。
アップデート チェックの頻度
サーバーからの応答には、いつ次のアップデート チェックを行うかを示す間隔も含まれます。 これにより、Unity がアップデートの利用を期待していない場合に、アップデート チェックの頻度を下げることができます。
バージョン アップデートのスキップ
プロジェクトの最中で、Unity の新しいバージョンにアップデートしたくない場合があるでしょう。 Unity エディタ アップデート チェック ダイアログで、Skip this version ボタンを押すと、Unity はこのアップデートに関して何も伝えなくなります。
アップデート チェックの無効化
アップデートのチェックを無効にはできません。 ダイアログの Check For Updates チェックボックスは、Unity 起動時に (利用できる場合) アップデートの通知を行うかをコントロールします。 Check for Updates オプションのチェックを外していても、メニュー項目の Help->Check for Updates メニュー項目を使用することで、アップデートのチェックができます。
Page last updated: 2012-11-09
Installing Multiple Versions of Unity
You can install more than one version of Unity on your machine as long as you follow the correct naming conventions for your folders. You need to rename each of the Unity folders themselves, so that the hierarchy looks like:
Unity_3.4.0
---Editor
---MonoDevelop
Unity_4.0b7
---Editor
---MonoDevelop
PC
- Install Unity 4.0 (www.unity3d.com/download)
- When you install on PC it will select the previously installed directory - do not install here
- Create a new directory named sensibly e.g. Unity_4
- Name any shortcuts so you know which version you are launching
- Hold Alt when you launch the beta to force Unity to let you choose which project to open (otherwise it will try to upgrade the last opened project)
- Choose your projectname_4 directory to open your backed up project
Do not rename each Editor folder inside a single Unity folder! You will overwrite the MonoDevelop folder and this will cause serious stability problems and unexpected crashes.
Mac
- Find your existing Unity application folder and rename it appropriately, e.g. Unity35
- Install Unity 4.0 (www.unity3d.com/download)
- Name any shortcuts so you know which version you are launching
- Hold Alt when you launch the beta to force Unity to let you choose which project to open (otherwise it will try to upgrade the last opened project)
- Choose your projectname_4 directory to open your backed up project
TroubleShooting
This section addresses common problems that can arise when using Unity. Each platform is dealt with separately below.

Desktop
In MonoDevelop, the Debug button is greyed out!
- This means that MonoDevelop was unable to find the Unity executable. In the MonoDevelop preferences, go to the Unity/Debugger section and then browse to where your Unity executable is located.
Is there a way to get rid of the welcome page in MonoDevelop?
- Yes. In the MonoDevelop preferences, go to the Visual Style section, and uncheck "Load welcome page on startup".
Geforce 7300GT on OSX 10.6.4
- Deferred rendering is disabled because materials are not displayed correctly on the Geforce 7300GT with OSX 10.6.4; this happens because of buggy video drivers.
On Windows x64, Unity crashes when my script throws a NullReferenceException
- Please apply Windows Hotfix #976038.
Graphics
Slow framerate and/or visual artifacts.
- This may occur if your video card drivers are not up to date. Make sure you have the latest official drivers from your card vendor.
Shadows
I see no shadows at all!
- Shadows are a Unity Pro only feature, so without Unity Pro you won't get shadows. Simpler shadow methods, like using a Projector, are still possible, of course.
- Shadows also require certain graphics hardware support. See Shadows page for details.
- Check if shadows are not completely disabled in Quality Settings.
- Shadows are currently not supported for Android and iOS mobile platforms.
Some of my objects do not cast or receive shadows
An object's Renderer must have Receive Shadows enabled for shadows to be rendered onto it. Also, an object must have Cast Shadows enabled in order to cast shadows on other objects (both are on by default).
Only opaque objects cast and receive shadows. This means that objects using the built-in Transparent or Particle shaders will not cast shadows. In most cases it is possible to use Transparent Cutout shaders for objects like fences, vegetation, etc. If you use custom written Shaders, they have to be pixel-lit and use the Geometry render queue. Objects using VertexLit shaders do not receive shadows but are able to cast them.
Only Pixel lights cast shadows. If you want to make sure that a light always casts shadows no matter how many other lights are in the scene, then you can set it to Force Pixel render mode (see the Light reference page).
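These per-renderer settings can also be toggled from script, which is handy when diagnosing shadow problems; a small sketch (Unity 3.x/4.x renderer API shown, both settings are on by default):

```csharp
// C# sketch: making sure a renderer both casts and receives shadows.
using UnityEngine;

public class ShadowSetup : MonoBehaviour
{
    void Start ()
    {
        renderer.castShadows = true;      // cast shadows onto other objects
        renderer.receiveShadows = true;   // have shadows rendered onto this object
    }
}
```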

iOS
Troubleshooting on iOS devices
There are some situations with iOS where your game can work perfectly in the Unity editor but then doesn't work or maybe doesn't even start on the actual device. The problems are often related to code or content quality. This section describes the most common scenarios.
The game stops responding after a while. Xcode shows "interrupted" in the status bar.
There are a number of reasons why this may happen. Typical causes include:
- Scripting errors such as using uninitialized variables, etc.
- Using 3rd party Thumb compiled native libraries. Such libraries trigger a known problem in the iOS SDK linker and might cause random crashes.
- Using generic types with value types as parameters (eg, List<int>, List<SomeStruct>, List<SomeEnum>, etc) for serializable script properties.
- Using reflection when managed code stripping is enabled.
- Errors in the native plugin interface (the managed code method signature does not match the native code function signature).
Information from the XCode Debugger console can often help detect these problems (Xcode menu: View > Debug Area > Activate Console).
The Xcode console shows "Program received signal: SIGBUS" or an EXC_BAD_ACCESS error.
This message typically appears on iOS devices when your application receives a NullReferenceException. There are two ways to figure out where the fault happened:
Managed stack traces
Since version 3.4 Unity includes software-based handling of the NullReferenceException. The AOT compiler includes quick checks for null references each time a method or variable is accessed on an object. This feature affects script performance which is why it is enabled only for development builds (for basic license users it is enough to enable the "development build" option in the Build Settings dialog, while iOS pro license users additionally need to enable the "script debugging" option). If everything was done right and the fault actually is occurring in .NET code then you won't see EXC_BAD_ACCESS anymore. Instead, the .NET exception text will be printed in the Xcode console (or else your code will just handle it in a "catch" statement). Typical output might be:
Unhandled Exception: System.NullReferenceException: A null value was found where an object instance was required. at DayController+$handleTimeOfDay$121+$.MoveNext () [0x0035a] in DayController.js:122
This indicates that the fault happened in the handleTimeOfDay method of the DayController class, which works as a coroutine. Also if it is script code then you will generally be told the exact line number (eg, "DayController.js:122"). The offending line might be something like the following:
Instantiate(_imgwww.assetBundle.mainAsset);
This might happen if, say, the script accesses an asset bundle without first checking that it was downloaded correctly.
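A safer version of that call checks the download result before touching the bundle; a sketch in C# (the URL is hypothetical, and the _imgwww name is reused from the offending line above):

```csharp
// C# sketch: verifying a WWW download before using its asset bundle.
using System.Collections;
using UnityEngine;

public class BundleLoader : MonoBehaviour
{
    IEnumerator Start ()
    {
        WWW _imgwww = new WWW("http://example.com/bundle.unity3d");   // hypothetical URL
        yield return _imgwww;

        if (_imgwww.error != null)
        {
            Debug.Log("Download failed: " + _imgwww.error);
        }
        else if (_imgwww.assetBundle != null && _imgwww.assetBundle.mainAsset != null)
        {
            // Only instantiate once the bundle and its main asset are known to exist.
            Instantiate(_imgwww.assetBundle.mainAsset);
        }
    }
}
```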
Native stack traces
Native stack traces are a much more powerful tool for fault investigation but using them requires some expertise. Also, you generally can't continue after these native (hardware memory access) faults happen. To get a native stack trace, type bt all into the Xcode Debugger Console. Carefully inspect the printed stack traces - they may contain hints about where the error occurred. You might see something like:
...
Thread 1 (thread 11523):
#0 0x006267d0 in m_OptionsMenu_Start ()
#1 0x002e4160 in wrapper_runtime_invoke_object_runtime_invoke_void__this___object_intptr_intptr_intptr ()
#2 0x00a1dd64 in mono_jit_runtime_invoke (method=0x18b63bc, obj=0x5d10cb0, params=0x0, exc=0x2fffdd34) at /Users/mantasp/work/unity/unity-mono/External/Mono/mono/mono/mini/mini.c:4487
#3 0x0088481c in MonoBehaviour::InvokeMethodOrCoroutineChecked ()
...
First of all you should find the stack trace for "Thread 1", which is the main thread. The very first lines of the stack trace will point to the place where the error occurred. In this example, the trace indicates that the NullReferenceException happened inside the "OptionsMenu" script's "Start" method. Looking carefully at this method implementation would reveal the cause of the problem. Typically, NullReferenceExceptions happen inside the Start method when incorrect assumptions are made about initialization order. In some cases only a partial stack trace is seen on the Debugger Console:
Thread 1 (thread 11523):
#0 0x0062564c in start ()
This indicates that native symbols were stripped during the Release build of the application. The full stack trace can be obtained with the following procedure:
- Remove application from device.
- Clean all targets.
- Build and run.
- Get stack traces again as described above.
EXC_BAD_ACCESS starts occurring when an external library is linked to the Unity iOS application.
This usually happens when an external library is compiled with the ARM Thumb instruction set. Currently such libraries are not compatible with Unity. The problem can be solved easily by recompiling the library without Thumb instructions. You can do this for the library's Xcode project with the following steps:
- in Xcode, select "View" > "Navigators" > "Show Project Navigator" from the menu
- select the "Unity-iPhone" project, activate "Build Settings" tab
- in the search field, enter "Other C Flags"
- add the -mno-thumb flag there and rebuild the library
If the library source is not available you should ask the supplier for a non-thumb version of the library.
The Xcode console shows "WARNING -> applicationDidReceiveMemoryWarning()" and the application crashes immediately afterwards
(Sometimes you might see a message like Program received signal: 0.) This warning message is often not fatal and merely indicates that iOS is low on memory and is asking applications to free up some memory. Typically, background processes like Mail will free some memory and your application can continue to run. However, if your application continues to use memory or ask for more, the OS will eventually start killing applications and yours could be one of them. Apple does not document what memory usage is safe, but empirical observations show that applications using less than 50% of all device RAM (e.g. ~200-256 MB for a 2nd generation iPad) do not have major memory usage problems. The main metric you should rely on is how much RAM your application uses. Your application memory usage consists of the following major components:
- application code (the OS needs to load and keep your application code in RAM, but some of it might be discarded if really needed)
- native heap (used by the engine to store its state, your assets, etc. in RAM)
- managed heap (used by your Mono runtime to keep C# or JavaScript objects)
- GLES driver memory pools: textures, framebuffers, compiled shaders, etc.
Your application memory usage can be tracked with several Xcode Instruments tools: Activity Monitor, Object Allocations and VM Tracker. You can start them from the Xcode menu: Product > Profile, then select the specific tool. The Activity Monitor tool shows all process statistics, including Real memory, which can be regarded as the total amount of RAM used by your application. Note: the combination of OS and device hardware version can noticeably affect memory usage numbers, so be careful when comparing numbers obtained on different devices.
Note: The internal profiler shows only the heap allocated by .NET scripts. Total memory usage can be determined via Xcode Instruments as shown above. This figure includes parts of the application binary, some standard framework buffers, Unity engine internal state buffers, the .NET runtime heap (number printed by internal profiler), GLES driver heap and some other miscellaneous stuff.
The Object Allocations tool displays all allocations made by your application and includes both native heap and managed heap statistics (don't forget to tick the Created and still living box to get the current state of the application). The important statistic is the Net bytes value.

To keep memory usage low:
- Reduce the application binary size by using the strongest iOS stripping options (Advanced license feature), and avoid unnecessary dependencies on different .NET libraries. See the player settings and player size optimization manual pages for further details.
- Reduce the size of your content. Use PVRTC compression for textures and use low poly models. See the manual page about reducing file size for more information.
- Don't allocate more memory than necessary in your scripts. Track the mono heap size and usage with the internal profiler.
- Note: with Unity 3.0, the scene loading implementation has changed significantly and now all scene assets are preloaded. This results in fewer hiccups when instantiating game objects. If you need more fine-grained control of asset loading and unloading during gameplay, you should use Resources.Load and Object.Destroy.
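A minimal sketch of that fine-grained approach in UnityScript (the "Enemies/Boss" path and the 10 second lifetime are hypothetical examples; the asset must live under a folder named Resources in your project):

```javascript
// UnityScript sketch - load an asset on demand, then release it.
// "Enemies/Boss" is a made-up example path under a Resources folder.
function SpawnBossTemporarily () {
    var prefab : GameObject = Resources.Load("Enemies/Boss", GameObject);
    var boss : GameObject = Instantiate(prefab);

    yield WaitForSeconds(10.0);      // keep the instance around for a while

    Destroy(boss);                   // remove the instance from the scene
    Resources.UnloadUnusedAssets();  // let the engine reclaim the asset memory
}
```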
Querying the OS about the amount of free memory may seem like a good idea to evaluate how well your application is performing. However, the free memory statistic is likely to be unreliable since the OS uses a lot of dynamic buffers and caches. The only reliable approach is to keep track of memory consumption for your application and use that as the main metric. Pay attention to how the graphs from the tools described above change over time, especially after loading new levels.
The game runs correctly when launched from Xcode but crashes while loading the first level when launched manually on the device.
There could be several reasons for this. You need to inspect the device logs to get more details. Connect the device to your Mac, launch Xcode and select Window > Organizer from the menu. Select your device in the Organizer's left toolbar, then click on the "Console" tab and review the latest messages carefully. Additionally, you may need to investigate crash reports. You can find out how to obtain crash reports here: http://developer.apple.com/iphone/library/technotes/tn2008/tn2151.html.
The Xcode Organizer console contains the message "killed by SpringBoard".
There is a poorly-documented time limit for an iOS application to render its first frames and process input. If your application exceeds this limit, it will be killed by SpringBoard. This may happen in an application with a first scene which is too large, for example. To avoid this problem, it is advisable to create a small initial scene which just displays a splash screen, waits a frame or two with yield and then starts loading the real scene. This can be done with code as simple as the following:
function Start () {
yield;
Application.LoadLevel("Test");
}
Type.GetProperty() / Type.GetValue() cause crashes on the device
Currently Type.GetProperty() and Type.GetValue() are supported only for the .NET 2.0 Subset profile. You can select the .NET API compatibility level in the Player Settings.
Note: Type.GetProperty() and Type.GetValue() might be incompatible with managed code stripping and might need to be excluded (you can supply a custom non-strippable type list during the stripping process to accomplish this). For further details, see the iOS player size optimization guide.
The game crashes with the error message "ExecutionEngineException: Attempting to JIT compile method 'SomeType`1<SomeValueType>:.ctor ()' while running with --aot-only."
The Mono .NET implementation for iOS is based on AOT (ahead-of-time compilation to native code) technology, which has its limitations. It compiles only those generic type methods (where a value type is used as a generic parameter) which are explicitly used by other code. When such methods are used only via reflection or from native code (e.g. the serialization system) then they get skipped during AOT compilation. The AOT compiler can be hinted to include code by adding a dummy method somewhere in the script code. This can refer to the missing methods and so get them compiled ahead of time.
void _unusedMethod()
{
var tmp = new SomeType<SomeValueType>();
}
Note: value types are basic types, enums and structs.
Various crashes occur on the device when a combination of System.Security.Cryptography and managed code stripping is used
.NET Cryptography services rely heavily on reflection and so are not compatible with managed code stripping, since stripping involves static code analysis. Sometimes the easiest solution to the crashes is to exclude the whole System.Security.Cryptography namespace from the stripping process.
The stripping process can be customized by adding a custom link.xml file to the Assets folder of your Unity project. This specifies which types and namespaces should be excluded from stripping. Further details can be found in the iOS player size optimization guide.
link.xml
<linker>
<assembly fullname="mscorlib">
<namespace fullname="System.Security.Cryptography" preserve="all"/>
</assembly>
</linker>
Application crashes when using System.Security.Cryptography.MD5 with managed code stripping
You can follow the advice above, or work around this problem by adding an extra reference to the specific class in your script code:
object obj = new MD5CryptoServiceProvider();
"Ran out of trampolines of type 1/2" runtime error
This error usually happens if you use lots of recursive generics. You can hint to the AOT compiler to allocate more trampolines of type 1 or type 2. Additional AOT compiler command line options can be specified in the "Other Settings" section of the Player Settings. For type 1 trampolines, specify nrgctx-trampolines=ABCD, where ABCD is the number of trampolines required (e.g. 4096). For type 2 trampolines, specify nimt-trampolines=ABCD.
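For example, to raise the type 1 trampoline limit you might enter something like the following in that options field (8192 is an arbitrary illustrative count, not a recommended value):

```
nrgctx-trampolines=8192
```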
After upgrading Xcode, the Unity iOS runtime fails with the message "You are using Unity iPhone Basic. You are not allowed to remove the Unity splash screen from your game"
Some recent Xcode releases introduced changes to the PNG compression and optimization tool. These changes can cause false positives in the Unity iOS runtime's checks for splash screen modifications. If you encounter this problem, try upgrading to the latest publicly available version of Unity. If that does not help, you can try the following workaround:
- Build the Xcode project from scratch from Unity using the Replace option (instead of appending)
- Delete the already-installed application from the device
- Clean project in Xcode (Product->Clean)
- Clear Xcode's Derived Data folders (Xcode->Preferences->Locations)
If this still does not help try disabling PNG re-compression in Xcode:
- Open your Xcode project
- Select "Unity-iPhone" project there
- Select "Build Settings" tab there
- Look for "Compress PNG files" option and set it to NO
App Store submission fails with an "iPhone/iPod Touch: application executable is missing a required architecture. At least one of the following architecture(s) must be present: armv6" message
You may get this message when updating an existing application that was previously submitted with armv6 support. Unity 4.x and Xcode 4.5 no longer support the armv6 platform. To solve the submission problem, set the Target OS Version in the Unity Player Settings to 4.3 or higher.
WWW downloads are working fine in Unity Editor and on Android, but not on iOS
The most common mistake is to assume that WWW downloads always happen on a separate thread. On some platforms this might be true, but you should not take it for granted. The best way to track the status of a WWW download is either to use a yield statement or to check the status in the Update method. You should not use busy while loops for this.
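A minimal sketch of the yield approach in UnityScript (the URL is a placeholder):

```javascript
// Download a file and wait for completion without blocking the player loop.
function Start () {
    var www = new WWW("http://example.com/data.bin");  // placeholder URL
    yield www;                         // resumes once the download finishes
    if (www.error == null)
        Debug.Log("Downloaded " + www.bytes.Length + " bytes");
    else
        Debug.Log("Download failed: " + www.error);
}
```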
"PlayerLoop called recursively!" error occurs when using Cocoa via a native function called from a script
Some operations with the UI will result in iOS redrawing the window immediately (the most common example is adding a UIView with a UIViewController to the main UIWindow). If you call a native function from a script, it will happen inside Unity's PlayerLoop, resulting in PlayerLoop being called recursively. In such cases, you should consider using the performSelectorOnMainThread method with waitUntilDone set to false. It will inform iOS to schedule the operation to run between Unity's PlayerLoop calls.
Profiler or Debugger unable to see game running on iOS device
- Check that you have built a Development build, and ticked the "Enable Script Debugging" and "Autoconnect profiler" boxes (as appropriate).
- The application running on the device will make a multicast broadcast to 225.0.0.222 on UDP port 54997. Check that your network settings allow this traffic. Then, the profiler will make a connection to the remote device on a port in the range 55000 - 55511 to fetch profiler data from the device. These ports will need to be open for UDP access.
Missing DLLs
If your application runs fine in the Editor but you get errors in your iOS project, this may be caused by missing DLLs (e.g. I18N.dll, I18N.West.dll). In this case, try copying those DLLs from within the Unity.app to your project's Assets/Plugins folder. The location of the DLLs within the Unity app is:
Unity.app/Contents/Frameworks/Mono/lib/mono/unity
You should then also check the stripping level of your project to ensure the classes in the DLLs aren't being removed when the build is optimised. Refer to the iOS Optimisation Page for more information on iOS Stripping Levels.
Xcode Debugger console reports: ExecutionEngineException: Attempting to JIT compile method '(wrapper native-to-managed) Test:TestFunc (int)' while running with --aot-only
Typically this message occurs when a managed function delegate is passed to a native function, but the required wrapper code wasn't generated when building the application. You can help the AOT compiler by hinting which methods will be passed as delegates to native code. This can be done by adding the "MonoPInvokeCallbackAttribute" custom attribute. Currently only static methods can be passed as delegates to native code.
Sample code:
using UnityEngine;
using System.Collections;
using System;
using System.Runtime.InteropServices;
using AOT;
public class NewBehaviourScript : MonoBehaviour {
[DllImport ("__Internal")]
private static extern void DoSomething (NoParamDelegate del1, StringParamDelegate del2);
delegate void NoParamDelegate ();
delegate void StringParamDelegate (string str);
[MonoPInvokeCallback (typeof (NoParamDelegate))]
public static void NoParamCallback()
{
Debug.Log ("Hello from NoParamCallback");
}
[MonoPInvokeCallback (typeof (StringParamDelegate))]
public static void StringParamCallback(string str)
{
Debug.Log (string.Format ("Hello from StringParamCallback {0}", str));
}
// Use this for initialization
void Start () {
DoSomething(NoParamCallback, StringParamCallback);
}
}

Android
Troubleshooting Android development
Unity fails to install your application to your device
- Verify that your computer can actually see and communicate with the device. See the Publishing Builds page for further details.
- Check the error message in the Unity console. This will often help diagnose the problem.
If you get an error saying "Unable to install APK, protocol failure" during a build then this indicates that the device is connected to a low-power USB port (perhaps a port on a keyboard or other peripheral). If this happens, try connecting the device to a USB port on the computer itself.
Your application crashes immediately after launch.
- Ensure that you are not trying to use NativeActivity with devices that do not support it.
- Try removing any native plugins you have.
- Try disabling stripping.
- Use adb logcat to get the crash report from your device.
Building DEX Failed
This is an error that produces a message like the following:
Building DEX Failed! G:\Unity\JavaPluginSample\Temp/StagingArea> java -Xmx1024M -Djava.ext.dirs="G:/AndroidSDK/android-sdk_r09-windows\platform-tools/lib/" -jar "G:/AndroidSDK/android-sdk_r09-windows\platform-tools/lib/dx.jar" --dex --verbose --output=bin/classes.dex bin/classes.jar plugins Error occurred during initialization of VM Could not reserve enough space for object heap Could not create the Java virtual machine.
This is usually caused by having the wrong version of Java installed on your machine. Updating your Java installation to the latest version will generally solve this issue.
The game crashes after a couple of seconds when playing video
Make sure Settings->Developer Options->Don't keep activities isn't enabled on the phone. The video player is its own activity and therefore the regular game activity will be destroyed if the video player is activated.
My game quits when I press the sleep button
Change the <activity> tag in the AndroidManifest.xml to contain the android:configChanges attribute as described in the Android documentation.
An example activity tag might look something like this:
<activity android:name=".AdMobTestActivity"
android:label="@string/app_name"
android:configChanges="fontScale|keyboard|keyboardHidden|locale|mnc|mcc|navigation|orientation|screenLayout|screenSize|smallestScreenSize|uiMode|touchscreen">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
Shadows
Unity Pro makes it possible to use real-time shadows on any light. Objects can cast shadows onto each other and onto parts of themselves ("self shadowing"). All types of Lights - Directional, Spot and Point - support shadows.
Using shadows can be as simple as choosing Hard Shadows or Soft Shadows on a Light. However, if you want optimal shadow quality and performance, there are some additional things to consider.
The Shadow Troubleshooting page contains solutions to common shadowing problems.
Curiously enough, the best shadows are non-realtime ones! Whenever your game level geometry and lighting is static, just precompute lightmaps in your 3D application. Computing shadows offline will always result in better quality and performance than displaying them in real time. Now onto the realtime ones...
Tweaking shadow quality
Unity uses so-called shadow maps to display shadows. Shadow mapping is a texture-based approach; it's easiest to think of it as "shadow textures" projecting out from lights onto the scene. Thus, much like regular texturing, the quality of shadow mapping mostly depends on two factors:
- The resolution (size) of the shadow maps. The larger the shadow maps, the better the shadow quality.
- The filtering of the shadows. Hard shadows take the nearest shadow map pixel. Soft shadows average several shadow map pixels, resulting in smoother looking shadows (but soft shadows are more expensive to render).
Different Light types use different algorithms to calculate shadows.
- For Directional lights, the crucial settings for shadow quality are Shadow Distance and Shadow Cascades, found in Quality Settings. Shadow Resolution is also taken into account, but the first thing to try when improving directional shadow quality is reducing the shadow distance. All the details about directional light shadows can be found here: Directional Shadow Details.
- For Spot and Point lights, Shadow Resolution determines the shadow map size. Additionally, for lights that cover only a small area on the screen, smaller shadow map resolutions are used.
Details on how shadow map sizes are computed are in Shadow Size Details page.
Shadow performance
Realtime shadows are quite performance hungry, so use them sparingly. For each light to render its shadows, first any potential shadow casters must be rendered into the shadow map, then all shadow receivers are rendered with the shadow map. This makes shadow casting lights even more expensive than Pixel lights, but hey, computers are getting faster as well!
Soft shadows are more expensive to render than Hard shadows. The cost is entirely on the graphics card though (it's only longer shaders), so Hard vs. Soft shadows don't make any impact on the CPU or memory.
Quality Settings contains a setting called Shadow Distance - this is how far from the camera shadows are drawn. Often it makes no sense to calculate and display shadows that are 500 meters away from the camera, so use as low a shadow distance as possible for your game. This will help performance (and will improve the quality of directional light shadows; see above).
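The same setting can also be adjusted from a script at run time, for example to scale quality down on slower machines (a minimal UnityScript sketch; 40 is an arbitrary value for a small scene):

```javascript
// Shorten the shadow distance at startup.
function Start () {
    QualitySettings.shadowDistance = 40.0;
}
```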
Hardware support for shadows
Built-in shadows require a fragment program (pixel shader 2.0) capable graphics card. This is the list of supported cards:
- On Windows:
- ATI Radeon 9500 and up, Radeon X series, Radeon HD series.
- NVIDIA GeForce 6xxx, 7xxx, 8xxx, 9xxx, GeForce GT, GTX series.
- Intel GMA X3000 (965) and up.
- On Mac OS X:
- Mac OS X 10.4.11 or later.
- ATI Radeon 9500 and up, Radeon X, Radeon HD series.
- NVIDIA GeForce FX, 6xxx, 7xxx, 8xxx, 9xxx, GT, GTX series.
- Intel GMA 950 and later.
- Soft shadows are disabled because of driver bugs (hard shadows will be used instead).
- Mobile (iOS & Android):
- OpenGL ES 2.0
- GL_OES_depth_texture support. Most notably, Tegra-based Android devices do not have it, so shadows are not supported there.
Notes
- Forward rendering path supports only one directional shadow casting light. Vertex Lit rendering path does not support realtime shadows.
- Vertex-lit lights don't have shadows.
- Vertex-lit materials won't receive shadows (but do cast shadows).
- Transparent objects don't cast or receive shadows. Transparent Cutout objects do cast and receive shadows.
DirectionalShadowDetails
This page explains shadows from Directional lights in detail.
Note for mobile platforms: realtime shadows from Directional lights always use a single shadow cascade, and are limited to hard shadows.
Directional lights are mostly used as the main light source (sunlight or moonlight) in outdoor games. Viewing distances can be huge, especially in first- and third-person games, so shadows often need tweaking to get the best balance between image quality and performance for your situation.
Let's start with a good-looking shadow setup for a third-person game.

The shadows here look quite good!
Here, the viewing distance is about 50 game units, so the Shadow Distance in Quality Settings was set to 50. Shadow Cascades was set to 4, Shadow Resolution to High, and the light uses Soft Shadows.
The sections below dissect each aspect of directional light shadows.
Hard versus Soft shadows
Using the same light settings but switching the Shadow Type to Hard Shadows makes the transition from lit to shadowed areas hard: any given position is either 100% in shadow or 100% lit. Hard shadows are faster to render but often look less realistic than soft ones.

Hard shadows with a distance of 50 and four cascades.
Shadow Cascade count
For Directional lights, Unity can use so-called Cascaded Shadow Maps (also known as Parallel Split Shadow Maps), which give very good shadow quality, especially for long viewing distances. Cascaded shadows work by dividing the viewing area into progressively larger portions and using a shadow map of the same size on each of them. The result is that objects close to the viewer get more shadow map pixels than objects far away.
The images below use hard shadows because the shadow pixels are easier to see that way.
If no cascaded shadow maps are used, the entire shadow distance (still 50 units in our case) must be covered uniformly by the shadow texture. Hard shadows with no cascades look like this:

Hard shadows with a distance of 50 and no cascades.
The pixels of the shadow texture are the same size everywhere. While they look quite good in the distance, the quality drops the closer you get. The shadow map covers the entire viewing area, and if visualized it looks like this:

With no cascades, the shadow texture covers the viewing area uniformly.
When two shadow cascades are used, the entire shadow distance is divided into a smaller chunk near the viewer and a larger chunk further away. Hard shadows with two cascades look like this:

Hard shadows with a distance of 50 and two cascades.
In exchange for some performance, shadow resolution increases the closer you get to the viewer.

With two cascades, two shadow textures cover different sized portions of the viewing area.
Finally, when four shadow cascades are used, the shadow distance is divided into four progressively larger portions. Hard shadows with four cascades look like this:

Hard shadows with a distance of 50 and four cascades. We've seen this one already!

With four cascades, four shadow textures cover different sized portions of the viewing area.
Shadow Distance is important!
Shadow Distance is extremely important for both the quality and performance of directional light shadows. Just like the shadow cascade count, Shadow Distance can be set in Quality Settings and offers an easy way to scale shadows down on less capable hardware.
Shadows fade out at the end of the Shadow Distance, and beyond it objects receive no shadows at all. In most situations, shadows beyond some distance in the game would not be noticeable anyway!
With no shadow cascades, hard shadows and a shadow distance of 20 units, our shadows look like the picture below. The shadows fade out in the distance, but at the same time shadow quality is much better than it was with a distance of 50 units and no cascades.

Hard shadows with a distance of 20 and no cascades.
If the shadow distance is set too high, on the other hand, the shadows won't look good at all. Setting the distance to 100 here decreases both performance and quality and makes no sense - no object in the scene is further away than about 50 meters.

Hard shadows with a distance of 100 and no cascades. Ouch!
Shadow maps with cascades scale much better with distance. For example, soft shadows with four cascades covering 300 units in front of the camera look like the picture below. It's somewhat worse than the picture at the top of this page, but not bad at all for a 6x increase in shadowing distance (of course, in this scene there's no real need for such a high shadow distance).

Soft shadows with a distance of 300 and four cascades.
Page last updated: 2012-11-26
Shadow Troubleshooting
This page lists solutions to common shadow problems.
I see no shadows at all!
- Shadows are a Unity Pro only feature, so without Unity Pro you won't get shadows. Simpler shadow methods, like using a Projector, are still possible of course.
- Shadows also require certain graphics hardware support. See Shadows page for details.
- Check if shadows are not completely disabled in Quality Settings.
Some of my objects do not cast or receive shadows
First, the Renderer has to have Receive Shadows on to have shadows on itself; and Cast Shadows on to cast shadows on other objects (both are on by default).
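Both flags can also be set from a script (a minimal UnityScript sketch using the Renderer properties of this Unity generation):

```javascript
// Make sure the object this script is attached to casts and receives shadows.
function Start () {
    renderer.castShadows = true;
    renderer.receiveShadows = true;
}
```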
Next, only opaque objects cast and receive shadows, so if you use the built-in Transparent or Particle shaders then you'll get no shadows. In most cases it's possible to use Transparent Cutout shaders instead (for objects like fences, vegetation, etc.). If you use custom written Shaders, they have to be pixel-lit and use the Geometry render queue. Objects using VertexLit shaders do not receive shadows either (but they can cast shadows just fine).
Finally, in Forward rendering path, only the brightest directional light can cast shadows. If you want to have many shadow casting lights, you need to use Deferred Lighting rendering path.
Page last updated: 2012-08-18
Shadow Size Details
In Unity, shadow map sizes are computed like this:
First, the light's "coverage box" on the screen is computed. This is the rectangle on the screen that the light covers:
- For Directional lights, it's the whole screen.
- For Spot lights, it's the bounding rectangle of the light's pyramid projected onto the screen.
- For Point lights, it's the bounding rectangle of the light's sphere projected onto the screen.
Then the larger of the box's width and height is taken; call this the pixel size.
At High shadow resolution, the shadow map size is:
- Directional lights: NextPowerOfTwo( pixel size * 1.9 ), but no more than 2048.
- Spot lights: NextPowerOfTwo( pixel size ), but no more than 1024.
- Point lights: NextPowerOfTwo( pixel size * 0.5 ), but no more than 512.
On graphics cards with 512 MB or more of video memory, the upper shadow map limits are increased (4096 for Directional, 2048 for Spot and 1024 for Point lights).
At Medium shadow resolution, the shadow map is 2x smaller than at High; at Low resolution, it's 4x smaller than at High.
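The High-resolution rules above can be sketched in a few lines of JavaScript (illustrative only - Unity's actual implementation may differ in detail, and the caps shown are the ones for cards with less than 512 MB of video memory):

```javascript
// Round up to the next power of two (NextPowerOfTwo in the rules above).
function nextPowerOfTwo(x) {
    var p = 1;
    while (p < x) p *= 2;
    return p;
}

// Shadow map size at High resolution for a light covering pixelSize pixels.
function shadowMapSize(lightType, pixelSize) {
    // multiplier and cap per light type, per the rules above
    var rules = {
        directional: { mul: 1.9, cap: 2048 },
        spot:        { mul: 1.0, cap: 1024 },
        point:       { mul: 0.5, cap: 512 }
    };
    var r = rules[lightType];
    return Math.min(nextPowerOfTwo(r.mul * pixelSize), r.cap);
}
```

For a spot light covering 600 pixels, for instance, this gives NextPowerOfTwo(600) = 1024.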
The seemingly low limits for Point lights are there because they use cubemaps for shadows. That means six cubemap faces at this resolution must be kept in video memory. They are also quite expensive to render, as potential shadow casters have to be rendered into up to six cubemap faces.
Shadow size computation when running close to memory limits
When running close to video memory limits, Unity will automatically reduce the shadow map resolution computed above.
Generally, memory for the screen (backbuffer, frontbuffer, depth buffer) has to be in video memory, as does memory for render textures; Unity uses both to determine the memory allowed for shadow maps. When allocating a shadow map of the size computed above, its size will be reduced until it fits into (TotalVideoMemory - ScreenMemory - RenderTextureMemory) / 3.
Assuming that all regular textures, vertex data and other graphics objects could be swapped out of video memory, the theoretical maximum VRAM a shadow map could use would be (TotalVideoMemory - ScreenMemory - RenderTextureMemory). But the exact amounts of memory taken by the screen and render textures are unknown, some objects cannot be swapped out, and performance would be horrible if all textures were constantly being swapped in and out. So Unity does not allow shadow maps to exceed one third of the "generally available" video memory, which works quite well in practice.
IME Input
What is an Input Method Editor (IME)?
An input method is an operating system component or program that allows users to enter characters and symbols not found on their input device. For instance, it allows users of Western keyboards to input Chinese, Japanese, Korean and Indic characters on a computer. On many hand-held devices, such as mobile phones, it enables the use of the numeric keypad to enter Latin alphabet characters.
The term input method generally refers to a particular way of using the keyboard to input a particular language, for example the Cangjie method, the pinyin method, or the use of dead keys.
IME and Unity

Desktop

Unity provides IME support, which means you can write non-ASCII characters in all graphical user interfaces. This input method is fully integrated with the engine, so you don't need to do anything to activate it. To test it, just change your keyboard language to a non-ASCII language (e.g. Japanese) and start writing in the interface.
For more information and optimization tips when writing non-ASCII characters, check the character option in the font properties.
Note: IME in Unity is currently not supported on the Mac Web Player.

iOS
This feature is not supported on iOS devices.

Android
This feature is not supported on Android devices.
OptimizeForIntegratedCards
Polygon count matters
On most graphics cards today, polygon count does not really matter much. Common knowledge says that object count and fillrate are much more important. Unfortunately, that's not the case on most older integrated chips (Intel 945 / GMA 950 and similar). There, how much it matters depends on the complexity of the vertex shaders or lighting, and on the speed of the CPU (that's right: most integrated cards transform and light vertices on the CPU).
Big Bang Brain Games never went above 25 thousand triangles in a scene, using one or two per-vertex lights and no pixel lights (essentially the VertexLit rendering path). Quality Settings were used to speed up performance automatically when the frame rate dropped, so on higher-end machines a higher quality setting was used which had pixel lights enabled.
What slows things down is drawing objects multiple times, using complex vertex shaders, and using lots of polygons. This means:
- Use the VertexLit rendering path if possible. This will draw each object just once, no matter how many lights are in the scene.
- Try not to use lights at all, even vertex lights. Lights make sense if your geometry or your lights move. Otherwise, bake the lighting with the Lightmapper; the result will be faster and look much better.
- Optimize your geometry (see below).
- Use the Rendering Statistics window and the Profiler!
Optimize model geometry
When optimizing the geometry of a model, there are two basic rules:
- Don't use an excessive number of faces if you don't have to.
- Keep the number of UV mapping seams and hard edges as low as possible.
Note that the actual number of vertices that the graphics hardware has to process is usually not the same as the number displayed in a 3D application. Modeling applications usually display the geometric vertex count, i.e. the number of points that make up a model.
For a graphics card, however, some vertices have to be split into separate ones. If a vertex has multiple normals (it's on a hard edge), multiple UV coordinates, or multiple vertex colors, it has to be split. Consequently, the vertex count you see in Unity is almost always different from the one displayed in the 3D application.
Bake lighting
Bake your lighting either into lightmaps or into vertex colors. Unity has a built-in Lightmapping View, and many 3D modeling packages can bake lightmaps as well.
The process of generating a lightmapped environment takes only a little longer than just placing a light in the scene in Unity, but:
- It usually runs much faster, especially if you have many lights.
- And it looks much better, since you can bake global illumination.
Even next-gen games still rely heavily on lightmapping. Usually they use lightmapped environments and only one or two realtime dynamic lights.
Page last updated: 2012-11-13
Web Player Deployment
When building a Web Player, Unity automatically generates an HTML file next to the player data file. It contains the default HTML code to load the web player data file.
It is possible to further tweak and customize the generated HTML file to make it fit better with the containing site's design, to add more HTML content, etc. The following pages discuss the related subjects in depth:
- HTML code to load Unity content
- Working with UnityObject
- Customizing the Unity Web Player loading screen
- Customizing the Unity Web Player's Behavior
- Unity Web Player and browser communication
- Using web player templates
- Web Player Streaming
HTML code to load Unity Web Player content
Unity content is loaded in the browser by the Unity Web Player plugin. HTML code usually does not communicate with this plugin directly, but through a script called UnityObject. Its primary purpose is to make embedding Unity content a very simple task by shielding the user from various browser- and platform-specific issues. It also makes it easy to install the web player.
The HTML file generated by Unity when building a web player contains all the commonly required functionality. In most cases you don't have to modify the HTML file at all. The rest of this document explains the inner workings of this file.
The UnityObject script has to be loaded before it can be used. This is done at the top of the <head> section:
<script type="text/javascript">
<!--
var unityObjectUrl = "http://webplayer.unity3d.com/download_webplayer-3.x/3.0/uo/UnityObject.js";
if (document.location.protocol == 'https:')
unityObjectUrl = unityObjectUrl.replace("http://", "https://ssl-");
document.write('<script type="text/javascript" src="' + unityObjectUrl + '"></script>');
-->
</script>
You can use the global unityObject variable to perform various Unity-related tasks, the most important one being embedding Unity content. This is done by calling the embedUnity method, which accepts several parameters. The first one specifies the id of the HTML element (placeholder) that will be replaced by Unity content. It can be any HTML element, with <div> being the most common. Think of it as a temporary placeholder that Unity will replace. The second parameter specifies the path to the web player data file to be displayed. The third and fourth parameters specify the width and height of the display area used by the web player content. Values can be specified in pixels (e.g. 600, 450) or as percentages (e.g. 50%, 100%).
unityObject.embedUnity("unityPlayer", "WebPlayer.unity3d", 600, 450);
Finally, the HTML placeholder is placed inside the <body> section. It could be as simple as <div id="unityPlayer" />. However, for maximum compatibility it's best to place a warning message inside it for the case where the browser doesn't support JavaScript and the placeholder isn't replaced by UnityObject:
<div id="unityPlayer"> <div class="missing"> <a href="http://unity3d.com/webplayer/" title="Unity Web Player. Install now!"> <img alt="Unity Web Player. Install now!" src="http://webplayer.unity3d.com/installation/getunity.png" width="193" height="63" /> </a> </div> </div>
Page last updated: 2012-11-26
Working with UnityObject
UnityObject is a JavaScript script that simplifies embedding Unity content into HTML. It has functions to detect the Unity Web Player plugin, initiate web player installation, and embed Unity content. Although you can deploy the UnityObject.js file to your web server alongside the HTML file, it's best to load it directly from the Unity server at http://webplayer.unity3d.com/download_webplayer-3.x/3.0/uo/UnityObject.js. That way you will always reference the most up-to-date version of UnityObject. The UnityObject.js file hosted on the Unity server is minified to make it smaller and save traffic. If you want to explore the source code, you can find the original file in the Data\Resources folder on Windows and in the Contents/Resources folder on Mac OS X. By default, UnityObject sends anonymous data to Google Analytics, which is used to help understand installer types and conversion rates.
Functions
embedUnity
Embeds Unity content into HTML.
Parameters:
- id - The HTML element (placeholder) to be replaced by Unity content.
- src - Path to the web player data file. Can be relative or absolute.
- width - Width of the Unity content. Can be specified in pixels (e.g. 600, 450) or as a percentage (e.g. 50%, 100%).
- height - Height of the Unity content. Can be specified in pixels (e.g. 600, 450) or as a percentage (e.g. 50%, 100%).
- params - Optional. An object containing a list of parameters. See Customizing the Unity Web Player loading screen and Customizing the Unity Web Player's Behavior for possible values.
- attributes - Optional. An object containing a list of attributes. These will be added to the underlying <object> or <embed> tag, depending on the browser.
- callback - Optional. A function that will be called once the web player has loaded. The function must accept a single argument that contains the following properties:
- success - Boolean value indicating whether the operation succeeded.
- id - Identifier of the loaded web player object (the same as the placeholder id).
- ref - The loaded web player object.
Notes:
This function usually returns before the operation fully completes, so it is not safe to access the web player object immediately afterwards. You can pass a callback function to get notified on completion. Alternatively, call getObjectById repeatedly until it doesn't return a null value.
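A sketch of the polling alternative (this assumes UnityObject.js has been loaded and embedUnity was called with the placeholder id "unityPlayer"):

```javascript
// Poll until the web player object becomes available.
function waitForPlayer() {
    var unity = unityObject.getObjectById("unityPlayer");
    if (unity == null) {
        setTimeout(waitForPlayer, 100);  // not loaded yet - try again shortly
        return;
    }
    // It is now safe to call into the player, e.g. unity.SendMessage(...).
}
waitForPlayer();
```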
Example:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Unity Web Player | Example</title>
<script type="text/javascript" src="http://webplayer.unity3d.com/download_webplayer-3.x/3.0/uo/UnityObject.js"></script>
<script type="text/javascript">
<!--
if (typeof unityObject != "undefined") {
unityObject.embedUnity("unityPlayer", "Example.unity3d", 600, 450, null, null, unityLoaded);
}
function unityLoaded(result) {
if (result.success) {
var unity = result.ref;
var version = unity.GetUnityVersion("3.x.x");
alert("Unity Web Player loaded!\nId: " + result.id + "\nVersion: " + version);
}
else {
alert("Please install the Unity Web Player!\nId: " + result.id);
}
}
-->
</script>
</head>
<body>
<!-- This will be replaced by the Unity content. -->
<div id="unityPlayer">Unity content can not be played. Make sure you are using a compatible browser with JavaScript enabled.</div>
</body>
</html>
getObjectById
Searches for a web player object.
Parameters:
- id - Identifier of the web player object.
- callback - Optional. A function to be called once the web player is found. It must accept a single parameter that contains the web player object.
Returns the web player object, or null if the web player has not been loaded yet.
Example:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Unity Web Player | Example</title>
<script type="text/javascript" src="http://webplayer.unity3d.com/download_webplayer-3.x/3.0/uo/UnityObject.js"></script>
<script type="text/javascript">
<!--
if (typeof unityObject != "undefined") {
unityObject.embedUnity("unityPlayer", "Example.unity3d", 600, 450, null, null, function(result) {
if (result.success) {
var versionButton = document.getElementById("versionButton");
versionButton.disabled = false;
}
});
}
function versionButtonClick() {
var unity = unityObject.getObjectById("unityPlayer");
var version = unity.GetUnityVersion("3.x.x");
alert(version);
}
-->
</script>
</head>
<body>
<!-- This will be replaced by the Unity content. -->
<div id="unityPlayer">Unity content can not be played. Make sure you are using a compatible browser with JavaScript enabled.</div>
<div>
<input id="versionButton" type="button" value="Version" disabled="disabled" onclick="versionButtonClick();" />
</div>
</body>
</html>
enableFullInstall
Installs the full web player if it is not available. Normally only a tiny part of the web player is installed initially, and the remaining files are downloaded automatically later. The default value is false.
Parameters:
- value - Boolean value that enables or disables this feature.
enableAutoInstall
Automatically starts the web player installation if it is not available. Some platforms don't support this feature. The default value is false.
Parameters:
- value - Boolean value that enables or disables this feature.
enableJavaInstall
Enables Java-based installation. Some platforms don't support this feature. The default value is true.
Parameters:
- value - Boolean value that enables or disables this feature.
enableClickOnceInstall
Enables ClickOnce-based installation. Some platforms don't support this feature. The default value is true.
Parameters:
- value - Boolean value that enables or disables this feature.
enableGoogleAnalytics
Notifies Unity about the web player installation. This does nothing if the web player is already installed. The default value is true.
Parameters:
- value - Boolean value that enables or disables this feature.
addLoadEvent
Registers a function to be called once the web page has loaded.
Parameters:
- event - Function to be called once the web page has loaded. This function does not expect any parameters.
addDomLoadEvent
Registers a function to be called once the web page's DOM has loaded.
Parameters:
- event - Function to be called once the web page's DOM has loaded. This function does not expect any parameters.
Customizing the Unity Web Player loading screen
By default, the Unity Web Player displays a small Unity logo and a progress bar while loading web player content. You can customize the appearance of the loading screen, including both the logo and the progress bar.
Note that modifying the loading screen is only possible with Unity Pro.
There are six optional parameters that can be used to customize the appearance of the Unity Web Player loading screen:
- backgroundcolor: The background color of the web player content display region during loading; the default is white.
- bordercolor: The color of the one-pixel border drawn around the web player content display region during loading; the default is white.
- textcolor: The color of error message text (when the data file fails to load, for example). The default is black or white, depending on the background color.
- logoimage: The path to a custom logo image; the logo image is drawn centered within the web player content display region during loading.
- progressbarimage: The path to a custom image used as the progress bar during loading. The progress bar image's width is clipped based on the amount of file loading completed, so it starts at zero pixels wide and animates to its original width when loading is complete. The progress bar is drawn beneath the logo image.
- progressframeimage: The path to a custom image used to frame the progress bar during loading.
All color values provided must be six-digit hexadecimal colors (e.g. FFFFFF, 020F16). The image paths provided can be either relative or absolute links. All images must be PNG files in RGB format (without transparency) or RGBA format (with transparency) with eight bits per channel. Finally, the progressframeimage and the progressbarimage must be the same height.
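The color rule above can be checked before embedding. The helpers below are hypothetical, not part of UnityObject; they merely encode the six-digit hexadecimal requirement for the three color parameters:

```javascript
// Sketch: validate loading-screen color parameters before handing them
// to the embed call. "isHexColor" and "checkLoaderColors" are
// hypothetical helpers, not part of the UnityObject API.
function isHexColor(value) {
  return /^[0-9A-Fa-f]{6}$/.test(value);   // six hex digits, no "#"
}

function checkLoaderColors(params) {
  var bad = [];
  ["backgroundcolor", "bordercolor", "textcolor"].forEach(function (key) {
    if (params[key] !== undefined && !isHexColor(params[key])) {
      bad.push(key);   // collect parameters that break the rule
    }
  });
  return bad;          // an empty array means all colors are well-formed
}
```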
Here is an example script that customizes the appearance of the Unity Web Player loading screen. It sets the background color to light gray (A0A0A0), the border color to black (000000), the text color to white (FFFFFF), and the loader images to MyLogo.png, MyProgressBar.png and MyProgressFrame.png. All parameters are grouped into a single params object, which is passed when the UnityObject2 instance is created.
var params = {
backgroundcolor: "A0A0A0",
bordercolor: "000000",
textcolor: "FFFFFF",
logoimage: "MyLogo.png",
progressbarimage: "MyProgressBar.png",
progressframeimage: "MyProgressFrame.png"
};
var u = new UnityObject2({ params: params });
u.initPlugin(jQuery("#unityPlayer")[0], "Example.unity3d");
See UnityObject2 for more details.
Example using the above snippet:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Unity Web Player | "Sample"</title>
<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
<script type="text/javascript">
<!--
var unityObjectUrl = "http://webplayer.unity3d.com/download_webplayer-3.x/3.0/uo/UnityObject2.js";
if (document.location.protocol == 'https:')
unityObjectUrl = unityObjectUrl.replace("http://", "https://ssl-");
document.write('<script type="text\/javascript" src="' + unityObjectUrl + '"><\/script>');
-->
</script>
<script type="text/javascript">
var params = {
backgroundcolor: "A0A0A0",
bordercolor: "000000",
textcolor: "FFFFFF",
logoimage: "MyLogo.png",
progressbarimage: "MyProgressBar.png",
progressframeimage: "MyProgressFrame.png"
};
var u = new UnityObject2({ params: params });
u.observeProgress(function (progress) {
var $missingScreen = jQuery(progress.targetEl).find(".missing");
switch(progress.pluginStatus) {
case "unsupported":
showUnsupported();
break;
case "broken":
alert("You will need to restart your browser after installation.");
break;
case "missing":
$missingScreen.find("a").click(function (e) {
e.stopPropagation();
e.preventDefault();
u.installPlugin();
return false;
});
$missingScreen.show();
break;
case "installed":
$missingScreen.remove();
break;
case "first":
break;
}
});
jQuery(function(){
u.initPlugin(jQuery("#unityPlayer")[0], "Example.unity3d");
});
</script>
</head>
<body>
<p class="header">
<span>Unity Web Player | </span>WebPlayer
</p>
<div class="content">
<div id="unityPlayer">
<div class="missing">
<a href="http://unity3d.com/webplayer/" title="Unity Web Player. Install now!">
<img alt="Unity Web Player. Install now!" src="http://webplayer.unity3d.com/installation/getunity.png" width="193" height="63" />
</a>
</div>
</div>
</div>
<p class="footer">« created with <a href="http://unity3d.com/unity/" title="Go to unity3d.com">Unity</a> »</p>
</body>
</html>
Page last updated: 2012-11-26
WebPlayerBehaviorTags
The Unity Web Player allows developers to use a few optional parameters to easily control its behavior in a few ways:
- disableContextMenu: This parameter controls whether or not the Unity Web Player displays a context menu when the user right- or control-clicks on the content. Setting it to true prevents the context menu from appearing and allows content to utilize right-mouse behavior. To enable the context menu don't include this parameter.
- disableExternalCall: This parameter controls whether or not the Unity Web Player allows content to communicate with browser-based JavaScript. Setting it to true prevents browser communication, so content cannot call or execute JavaScript in the browser; the default is false.
- disableFullscreen: This parameter controls whether or not the Unity Web Player allows content to be viewed in fullscreen mode. Setting it to true prevents fullscreen viewing and removes the "Go Fullscreen" entry from the context menu; the default is false.
Using UnityObject2, you can control these parameters like this:
var params = {
disableContextMenu: true
};
var u = new UnityObject2({ params: params });
u.initPlugin(jQuery("#unityPlayer")[0], "Example.unity3d");
In the above example you'll notice that neither disableExternalCall nor disableFullscreen is specified, so their default values are used.
See UnityObject2 for more details.
Example setting all the behavior options:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Unity Web Player | "Sample"</title>
<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
<script type="text/javascript">
<!--
var unityObjectUrl = "http://webplayer.unity3d.com/download_webplayer-3.x/3.0/uo/UnityObject2.js";
if (document.location.protocol == 'https:')
unityObjectUrl = unityObjectUrl.replace("http://", "https://ssl-");
document.write('<script type="text\/javascript" src="' + unityObjectUrl + '"><\/script>');
-->
</script>
<script type="text/javascript">
var params = {
disableContextMenu: true,
disableExternalCall: false,
disableFullscreen: false,
};
var u = new UnityObject2({ params: params });
u.observeProgress(function (progress) {
var $missingScreen = jQuery(progress.targetEl).find(".missing");
switch(progress.pluginStatus) {
case "unsupported":
showUnsupported();
break;
case "broken":
alert("You will need to restart your browser after installation.");
break;
case "missing":
$missingScreen.find("a").click(function (e) {
e.stopPropagation();
e.preventDefault();
u.installPlugin();
return false;
});
$missingScreen.show();
break;
case "installed":
$missingScreen.remove();
break;
case "first":
break;
}
});
jQuery(function(){
u.initPlugin(jQuery("#unityPlayer")[0], "Example.unity3d");
});
</script>
</head>
<body>
<p class="header">
<span>Unity Web Player | </span>WebPlayer
</p>
<div class="content">
<div id="unityPlayer">
<div class="missing">
<a href="http://unity3d.com/webplayer/" title="Unity Web Player. Install now!">
<img alt="Unity Web Player. Install now!" src="http://webplayer.unity3d.com/installation/getunity.png" width="193" height="63" />
</a>
</div>
</div>
</div>
<p class="footer">« created with <a href="http://unity3d.com/unity/" title="Go to unity3d.com">Unity</a> »</p>
</body>
</html>
Page last updated: 2012-11-26
Unity Web Player and browser communication
The HTML page that contains Unity Web Player content can communicate with that content and vice versa. Basically there are two communication directions:
- The web page calls functions inside the Unity web player content.
- The Unity web player content calls functions in the web page.
Each of these communication directions is described in more detail below.
Calling Unity web player content functions from the web page
The Unity Web Player object has a function, SendMessage(), that can be called from a web page in order to call functions within Unity web player content. This function is very similar to the GameObject.SendMessage function in the Unity scripting API. When called from a web page you pass an object name, a function name and a single argument, and SendMessage() will call the given function in the given game object.
In order to call the Unity Web Player's SendMessage() function you must first get a reference to the Unity web player object. You can use the GetUnity() function in the default html generated by Unity to obtain a reference to the object. Here is an example JavaScript function that would execute the SendMessage() function on the Unity web player; in turn SendMessage() will then call the function MyFunction() on the game object named MyObject, passing a piece of string data as an argument:
<script type="text/javascript" language="javascript">
<!--
//initializing the WebPlayer
var u = new UnityObject2();
u.initPlugin(jQuery("#unityPlayer")[0], "Example.unity3d");
function SaySomethingToUnity()
{
u.getUnity().SendMessage("MyObject", "MyFunction", "Hello from a web page!");
}
-->
</script>
Inside of the Unity web player content you need to have a script attached to the GameObject named MyObject, and that script needs to implement a function named MyFunction:
function MyFunction(param : String)
{
Debug.Log(param);
}
Note: keep in mind that if the function doesn't have any arguments, then an empty string ("") should be passed as an argument.
A single string, integer or float argument must be passed when using SendMessage(); the parameter is required on the calling side. If you don't need it then just pass a zero or other default value and ignore it on the Unity side. Additionally, the game object can be specified by a path name, for example /MyObject/SomeChild, where SomeChild must be a child of MyObject and MyObject must be at the root level due to the '/' in front of its name.
Note: u.getUnity() might return null if the game isn't fully loaded yet, so it's a good idea to check that its value is not null before using SendMessage(), or to wait for your game to be fully loaded before trying to communicate with it.
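That null check can be wrapped once and reused. The wrapper below is a hypothetical helper, not part of the UnityObject2 API; "u" is a UnityObject2 instance as in the snippet above:

```javascript
// Sketch: a defensive wrapper around SendMessage that tolerates the
// player not being loaded yet. "safeSendMessage" is a hypothetical
// helper; "u" is assumed to be a UnityObject2 instance.
function safeSendMessage(u, objectName, funcName, arg) {
  var unity = u.getUnity();
  if (unity === null || unity === undefined) {
    return false;                 // player not ready yet; caller may retry
  }
  // SendMessage always takes exactly one argument; pass "" if unused.
  unity.SendMessage(objectName, funcName, arg === undefined ? "" : arg);
  return true;
}
```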
Calling web page functions from Unity web player content
In order to call a web page function from within your Unity web player content you must use the Application.ExternalCall() function. Using that function you can call any JavaScript function defined in the web page, passing any number of parameters to it. Here is an example Unity script that uses the Application.ExternalCall() function to call a function named SayHello() found within the web page, passing a piece of string data as an argument:
Application.ExternalCall( "SayHello", "The game says hello!" );
The web page would need to define the SayHello() function, for example:
<script type="text/javascript" language="javascript">
<!--
function SayHello( arg )
{
// show the message
alert( arg );
}
-->
</script>
Executing arbitrary browser code from Unity web player content
You don't even have to define functions in the embedding web page; instead you can use the Application.ExternalEval() function to execute arbitrary browser code from the web player content.
The following example checks that the page embedding the web player content is fetched from a certain host (unity3d.com); if that's not the case, it redirects to another URL. This technique can be used to prevent deep linking to your web player content:
Application.ExternalEval(
"if(document.location.host != 'unity3d.com') { document.location='http://unity3d.com'; }"
);
Page last updated: 2012-11-26
Using Web Player templates
When you build a webplayer project, Unity embeds the player in an HTML page so that it can be played in the browser. The default page is very simple, with just a white background and some minimal text. There are actually three different variations of this page which can be selected from the Player Settings inspector (menu: Edit > Project Settings > Player).

The built-in HTML pages are fine for testing and demonstrating a minimal player but for production purposes, it is often desirable to see the player hosted in the page where it will eventually be deployed. For example, if the Unity content interacts with other elements in the page via the external call interface then it must be tested with a page that provides those interacting elements. Unity allows you to supply your own pages to host the player by using webplayer templates.
Structure of a Webplayer Template
Custom templates are added to a project by creating a folder called "WebPlayerTemplates" in the Assets folder - the templates themselves are sub-folders within this folder. Each template folder contains an index.html or index.php file along with any other resources the page needs, such as images or stylesheets.

Once created, the template will appear among the options on the Player Settings inspector (the name of the template will be the same as its folder). Optionally, the folder can contain a file named thumbnail.png, which should have dimensions of 128x128 pixels. The thumbnail image will be displayed in the inspector to hint at what the finished page will look like.
Template Tags
During the build process, Unity will look for special tag strings in the page text and replace them with values supplied by the editor. These include the name, onscreen dimensions and various other useful information about the player.
The tags are delimited by percent signs (%) in the page source. For example, if the product name is defined as "MyPlayer" in the Player settings:-
<title>%UNITY_WEB_NAME%</title>
...in the template's index file will be replaced with
<title>MyPlayer</title>
...in the host page generated for the build. The complete set of tags is given below:-
UNITY_WEB_NAME
Name of the webplayer.
UNITY_WIDTH
UNITY_HEIGHT
Onscreen width and height of the player in pixels.
UNITY_WEB_PATH
Local path to the webplayer file.
UNITY_UNITYOBJECT_LOCAL
A browser JavaScript file called UnityObject2.js is generally used to embed the player in the host page and provide part of the interaction between Unity and the host's JavaScript. It is normally supplied to a page by downloading from Unity's website, but this requires an internet connection and causes problems if the page is to be deployed offline from the user's hard drive. This tag provides the local path to the UnityObject2.js file, which will be generated if the Offline Deployment option is enabled in the Build Settings.
UNITY_UNITYOBJECT_URL
In the usual case where the page will download UnityObject2.js from Unity's website (i.e. the Offline Deployment option is disabled), this tag will provide the download URL.
UNITY_UNITYOBJECT_DEPENDENCIES
UnityObject2.js has dependencies, and this tag will be replaced with the dependencies needed for it to work properly.
UNITY_BETA_WARNING
If the webplayer has been built with a beta version of Unity, this tag will be replaced with a short warning message about the fact. Otherwise, it is replaced with nothing.
UNITY_CUSTOM_SOME_TAG
If you add a tag to the index file with the form UNITY_CUSTOM_XXX, then this tag will appear in the Player Settings when your template is selected. For example, if something like
<title>Unity Web Player | %UNITY_CUSTOM_MYTAG%</title>
...is added to the source, the Player Settings will look like this:-

The textbox next to the tag's name contains the text that the custom tag will be replaced with during the build.
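The substitution mechanism described above can be illustrated with a short sketch. This is only an illustration of the idea, not Unity's actual build code:

```javascript
// Sketch of the build-time substitution: every %TAG% in the template
// text is replaced with its value from a lookup table.
function expandTemplate(source, values) {
  return source.replace(/%([A-Z0-9_]+)%/g, function (match, tag) {
    // leave unknown tags untouched, as a template might contain
    // percent signs that are not tags
    return values.hasOwnProperty(tag) ? values[tag] : match;
  });
}
```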
Example
To illustrate the use of the template tags, here is the HTML source that Unity uses for its default webplayer build.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Unity Web Player | %UNITY_WEB_NAME%</title>
%UNITY_UNITYOBJECT_DEPENDENCIES%
<script type="text/javascript">
<!--
var unityObjectUrl = "%UNITY_UNITYOBJECT_URL%";
if (document.location.protocol == 'https:')
unityObjectUrl = unityObjectUrl.replace("http://", "https://ssl-");
document.write('<script type="text\/javascript" src="' + unityObjectUrl + '"><\/script>');
-->
</script>
<script type="text/javascript">
<!--
jQuery(function() {
var config = {
width: %UNITY_WIDTH%,
height: %UNITY_HEIGHT%,
params: %UNITY_PLAYER_PARAMS%
};
var u = new UnityObject2(config);
var $missingScreen = jQuery("#unityPlayer").find(".missing");
var $brokenScreen = jQuery("#unityPlayer").find(".broken");
$missingScreen.hide();
$brokenScreen.hide();
u.observeProgress(function (progress) {
switch(progress.pluginStatus) {
case "broken":
$brokenScreen.find("a").click(function (e) {
e.stopPropagation();
e.preventDefault();
u.installPlugin();
return false;
});
$brokenScreen.show();
break;
case "missing":
$missingScreen.find("a").click(function (e) {
e.stopPropagation();
e.preventDefault();
u.installPlugin();
return false;
});
$missingScreen.show();
break;
case "installed":
$missingScreen.remove();
break;
case "first":
break;
}
});
u.initPlugin(jQuery("#unityPlayer")[0], "%UNITY_WEB_PATH%");
});
-->
</script>
<style type="text/css">
<!--
body {
font-family: Helvetica, Verdana, Arial, sans-serif;
background-color: white;
color: black;
text-align: center;
}
a:link, a:visited {
color: #000;
}
a:active, a:hover {
color: #666;
}
p.header {
font-size: small;
}
p.header span {
font-weight: bold;
}
p.footer {
font-size: x-small;
}
div.content {
margin: auto;
width: %UNITY_WIDTH%px;
}
div.broken,
div.missing {
margin: auto;
position: relative;
top: 50%;
width: 193px;
}
div.broken a,
div.missing a {
height: 63px;
position: relative;
top: -31px;
}
div.broken img,
div.missing img {
border-width: 0px;
}
div.broken {
display: none;
}
div#unityPlayer {
cursor: default;
height: %UNITY_HEIGHT%px;
width: %UNITY_WIDTH%px;
}
-->
</style>
</head>
<body>
<p class="header"><span>Unity Web Player | </span>%UNITY_WEB_NAME%</p>%UNITY_BETA_WARNING%
<div class="content">
<div id="unityPlayer">
<div class="missing">
<a href="http://unity3d.com/webplayer/" title="Unity Web Player. Install now!">
<img alt="Unity Web Player. Install now!" src="http://webplayer.unity3d.com/installation/getunity.png" width="193" height="63" />
</a>
</div>
<div class="broken">
<a href="http://unity3d.com/webplayer/" title="Unity Web Player. Install now! Restart your browser after install.">
<img alt="Unity Web Player. Install now! Restart your browser after install." src="http://webplayer.unity3d.com/installation/getunityrestart.png" width="193" height="63" />
</a>
</div>
</div>
</div>
<p class="footer">« created with <a href="http://unity3d.com/unity/" title="Go to unity3d.com">Unity</a> »</p>
</body>
</html>
Page last updated: 2012-11-26
Web Player Streaming
Web Player Streaming is critical for providing a great end-user experience with web games. The idea behind web games is that the user can view your content almost immediately and start playing the game as soon as possible, instead of waiting for a progress bar. This is very achievable, and we will explain how.
Tuning for Portals
This section focuses mostly on publishing to online game portals. Streaming is useful for all kinds of content, however, and it can easily be applied to many other situations.
Online game portals expect that some form of gameplay really starts after downloading at most 1 MB of data. If you don't reach this mark, the portal is unlikely to accept your content. From the user's perspective, the game needs to start quickly; otherwise their time is being wasted and they might just close the window.
On a 128 kilobit cable connection you can download 16 KB per second, or 1 MB per minute. This is the low end of the bandwidth online portals target.
The game would optimally be set up to stream something like this:
- 50 KB displays the logo and menu (4 seconds)
- 320 KB lets the user play a small tutorial level or fiddle with the menu (20 seconds)
- 800 KB lets the user play the first small level (50 seconds)
- Finish downloading the entire game within 1-5 MB (1-5 minutes)
The key point to remember is to think in wait times for a user on a slow connection. Never let the player wait.
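The wait times quoted above all follow from the 16 KB per second budget. As a sketch, this hypothetical helper (not part of any Unity API) turns a download size into the number of seconds a player on such a connection will wait:

```javascript
// Sketch: seconds a player waits to download "kilobytes" of data.
// Defaults to the 16 KB/s (128 kilobit) budget discussed above.
function secondsToDownload(kilobytes, kbPerSecond) {
  return kilobytes / (kbPerSecond || 16);
}
```

For example, the 320 KB tutorial budget works out to 20 seconds, and the 800 KB first level to 50 seconds, matching the list above.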
Now, don't panic if your web player currently is 10 MB. It may seem daunting to optimize it, but it turns out that with a little effort it is usually quite easy to structure your game this way. Think of each of the steps above as an individual scene. If you've made the game, you've done the hard part already; structuring scenes around this loading concept is comparatively easy!
If you open the console log (via the Console window on desktop platforms) during or after a build, you can see the size of each individual scene file. The console will show something like this:
***Player size statistics***
Level 0 'Main Menu' uses 95.0 KB compressed.
Level 1 'Character Builder' uses 111.5 KB compressed.
Level 2 'Level 1' uses 592.0 KB compressed.
Level 3 'Level 2' uses 2.2 MB compressed.
Level 4 'Level 3' uses 2.3 MB compressed.
Total compressed size 5.3 MB. Total decompressed size 9.9 MB.
This game could use some more optimization! For more information, see the reducing file size page.
The Most Important Steps
- Load the menu first. Showing an animated logo is a great way to make time pass unnoticed and lets the download progress further.
- Make the first level small and light on assets. This way the first level downloads quickly, and by keeping the player occupied playing it for a minute or two you can be sure the download of all remaining assets completes in the background. Why not have a mini tutorial level where the user can learn the game's controls? There is no reason to load high-resolution textures here, or to place all the enemies in the first level; use the lowest-polygon-count versions you have. And yes, this means you may have to design your game with the web player in mind.
- There is no reason to have all the music available when the game starts. Externalize the music and load it via the WWW class. Unity compresses audio with the high-quality Ogg Vorbis codec, but even when compressed, audio takes up a lot of space. If you need to fit everything into 3 MB and you have 5 minutes of music, all the compression in the world won't save you; sacrifices are necessary. Load a very short track that you can loop until more music has been downloaded, and load more music only once the player is hooked on the first level.
- Optimize your textures using their import settings. Once you have externalized the music, textures will easily account for 90% of the game. Typical texture sizes are too big for web deployment; in a small browser window, large textures don't add much visual fidelity. Use textures that are no larger than necessary (and be prepared for more sacrifices here). Halving the texture resolution actually makes the texture one quarter of the size. And of course, all textures should be DXT compressed.
- Generally reduce the size of your web players. There is a manual page dedicated to the utilities Unity offers for optimizing file size here. Although Unity uses cutting-edge LZMA-based compression, which usually shrinks game data to between one half and one third of its uncompressed size, you should try everything you can.
- Try to avoid Resources.Load. While Resources.Load can be very handy, Unity cannot order assets by when they are first used if you use Resources.Load, because a script might attempt to load the resource at any time. Using the First Streamed Level With Resources property, you can set the level that will include all assets that can be loaded through Resources.Load. Obviously you want to move the Resources.Load assets as late as possible into the game, or not use the feature at all.
Publishing Streaming Web Players
Streaming in Unity is level based, and there is an easy workflow to set this up. Internally, Unity does all the dirty work of tracking assets, organizing them optimally in the compressed data files, and ordering them by the first scene that uses them. All you need to worry about is making sure that the first level in the Build Settings uses as few assets as possible. This naturally means a menu level, but for a good web experience the first actual game level the player is going to play should be kept small too.
To use streaming in Unity, select Web Player Streamed in the Build Settings. Content will then start automatically as soon as all assets used by the first level are loaded. Try to keep the menu level at around 50-100 KB. The stream continues to load as fast as possible while decompressing in real time. If you watch the console during or after the build, you can see how large each level is.
You can query the progress of the stream by level, and once a level is available it can be loaded. Use GetStreamProgressForLevel to display a progress bar, and CanStreamedLevelBeLoaded to check whether all the data needed to load a specific level is available.
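As an illustration, in the same UnityScript style as the examples earlier in this manual, a loading script on the menu level might poll these two functions each frame. This is only a sketch under assumptions: the level name "Level1" and the log output are placeholders, not part of the streaming API.

```
// Sketch: attach to an object in the streamed menu level.
// "Level1" is a placeholder name for the first playable level.
function Update () {
    if (Application.CanStreamedLevelBeLoaded ("Level1")) {
        // All data for the level has been streamed; safe to load it.
        Application.LoadLevel ("Level1");
    } else {
        // Still streaming: GetStreamProgressForLevel returns 0..1.
        var progress : float = Application.GetStreamProgressForLevel ("Level1");
        Debug.Log ("Loading: " + Mathf.Round (progress * 100) + "%");
    }
}
```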
This form of streaming is of course linear, which matches how games work in most cases. Sometimes that is not enough, so Unity also provides an API for loading .unity3d files manually using the WWW class. Video and audio can be streamed as well, and can start playing almost immediately without requiring the movie to be downloaded first. Finally, textures, as well as any binary data your game might depend on, can easily be downloaded via the WWW class.
Page last updated: 2012-11-13
Reference
Refer to the information on these pages for details on working in-depth with various aspects of Unity.
The Unity Manual Guide contains sections that apply only to certain platforms. Please select which platforms you want to see. Platform-specific information can always be seen by clicking on the disclosure triangles on each page.
- Components
- Pathfinding
- Animation Components
- Asset Components
- Audio Components
- Physics Components
- GameObject
- Image Effects
- Antialiasing
- Bloom
- Camera Motion Blur
- Depth of Field
- Noise And Grain
- Screen Overlay
- Color Correction Lookup Texture
- Bloom and Lens Flares
- Color Correction Curves
- Contrast Enhance
- Crease
- Depth of Field 3.4
- Tonemapping
- Edge Detect Effect Normals
- Fisheye image effect
- Global Fog
- Sun Shafts
- Tilt Shift
- Vignetting (and Chromatic Aberration)
- Blur
- Color Correction image effect
- Contrast Stretch image effect
- Edge Detect image effect
- Glow image effect
- Grayscale image effect
- Motion Blur image effect
- Noise image effect
- Sepia Tone image effect
- Screen Space Ambient Occlusion (SSAO) image effect
- Twirl image effect
- Vortex image effect
- Settings Managers
- Mesh Components
- Network Group
- Effects
- Rendering Components
- Transform Component
- UnityGUI Group
- Wizards
- Terrain Engine Guide
- Tree Creator Guide
- Animation View Guide
- GUI Scripting Guide
- Networking Reference Guide
- Built-in Shader Guide
- Unity's Rendering Behind the Scenes
- SL-Reference
- Writing Surface Shaders
- Writing vertex and fragment programs
- ShaderLab syntax: Shader
- Advanced ShaderLab topics
- ShaderLab builtin values
- Scripting Concepts
Components
- Pathfinding
- Animation Components
- Asset Components
- Audio Components
- Physics Components
- GameObject
- Image Effects
- Antialiasing
- Bloom
- Camera Motion Blur
- Depth of Field
- Noise And Grain
- Screen Overlay
- Color Correction Lookup Texture
- Bloom and Lens Flares
- Color Correction Curves
- Contrast Enhance
- Crease
- Depth of Field 3.4
- Tonemapping
- Edge Detect Effect Normals
- Fisheye image effect
- Global Fog
- Sun Shafts
- Tilt Shift
- Vignetting (and Chromatic Aberration)
- Blur
- Color Correction image effect
- Contrast Stretch image effect
- Edge Detect image effect
- Glow image effect
- Grayscale image effect
- Motion Blur image effect
- Noise image effect
- Sepia Tone image effect
- Screen Space Ambient Occlusion (SSAO) image effect
- Twirl image effect
- Vortex image effect
- Settings Managers
- Mesh Components
- Network Group
- Effects
- Rendering Components
- Transform Component
- UnityGUI Group
- Wizards
comp-AIGroup
This section covers Unity's pathfinding features, which handle the process of moving between two points along an optimal path while avoiding obstacles.
Page last updated: 2012-11-10
class-NavMeshAgent
The NavMesh Agent component is used in connection with pathfinding; it defines the properties that govern how this agent moves across the NavMesh. It can be added from the menu.

| Radius | The agent's radius (used only for pathfinding purposes; it can be slightly larger than the actual object's radius and need not match it). |
| Speed | Maximum speed at which the agent moves toward its destination. |
| Acceleration | Maximum acceleration. |
| Angular Speed | Maximum rotation speed (degrees/second). |
| Stopping distance | Stopping distance. The agent starts to slow down when the distance to the destination falls within this range. |
| Auto Traverse OffMesh Link | Automates movement onto and off of OffMesh Links. |
| Height | The agent's height (used for debug rendering). |
| Base offset | Vertical offset between the actual object and its collider. |
| Obstacle Avoidance Type | Quality level of the avoidance behavior. |
| NavMesh Walkable | The types of NavMesh layers the agent can move on. |
(Back to Navigation and Pathfinding)
Page last updated: 2012-11-09
class-OffMeshLink
This section mainly covers manual Off-Mesh Links, i.e. those set up by hand using the OffMeshLink component. For automatically generated Off-Mesh Links, see also the NavMesh introduction.

It can happen that pieces of navmesh static geometry are not connected to each other, making it impossible for an agent to move from one area to another.
To solve this, Unity provides a mechanism called Off-Mesh Links.

The Off-Mesh Link component
Off-Mesh Links can be attached to any object and have the following properties:
| Start | The object marking the start position of the Off-Mesh Link. |
| End | The object marking the end position of the Off-Mesh Link. |
| Cost Override | If the value is positive, it is used when calculating the path cost during a path request. Otherwise the default cost is used (the cost of the layer to which the game object belongs). |
If Cost Override is set to 3.0, moving over the Off-Mesh Link will be three times more expensive than moving the same distance over the default NavMesh area. This property can be modified at runtime; no re-bake is needed.
| Bi Directional | If enabled, the link can be traversed in both directions. Otherwise it can only be traversed from Start to End. |
| Activated | Specifies whether this link will be used by the pathfinder. When false, the Off-Mesh Link is ignored. This property can be modified at runtime; no re-bake is needed. |
Notes on Off-Mesh Link properties
The Activated and Cost Override properties can be modified at runtime and take effect immediately. All other properties require a NavMesh re-bake before they take effect.
If the Start or End transforms are unassigned when baking, or if no valid position can be found because the Start or End transform is too far away from the NavMesh, the Off-Mesh Link will not be generated. In this case an error is displayed in the Console window.
(Back to Navigation and Pathfinding)
Page last updated: 2012-11-26
class-NavMeshObstacle
Fixed obstacles can be set up on a NavMesh as part of the baking process, but it is also possible to have moving obstacles that roaming agents will avoid. Such moving obstacles can be specified using the NavMesh Obstacle component, which can be attached to any game object and will move along with that object.

| Radius | Radius of the obstacle's cylinder. |
| Height | Height of the obstacle's cylinder. |
comp-AnimationGroup
- Animation
- Animation Clip
- Animator Component
- Animator Controller
- Creating the Avatar
- Avatar Body Mask
- Avatar Skeleton Mask
- Human Template files
- Animation States
- Animation Transitions
class-Animation

The Animation Inspector
Properties
| Animation | The default animation that will be played when Play Automatically is enabled. |
| Animations | A list of animations that can be accessed from scripts. |
| Play Automatically | Should the animation start playing automatically when the game starts? |
| Animate Physics | Should the animation interact with physics? |
| Culling Type | Determines when the animation will not be played. |
| Always Animate | Always animate. |
| Based on Renderers | Cull based on the default animation pose. |
| Based on Clip Bounds | Cull based on clip bounds computed at import time; the animation is not played when the clip bounds are out of view. |
| Based on User Bounds | Cull based on user-defined bounds; the animation is not played when the user-defined bounds are out of view. |
For details on how to create animations inside Unity, see the Animation View Guide. To import animated characters, see Animation Import, and for custom scripting, see Animation Scripting.
Page last updated: 2012-11-06
class-AnimationClip

An imported animation shown in the Project View
An Animation Clip stores all the animation data that can be used for animated characters or simple animations.
It has only one property, the Sample Rate, which cannot be modified: it stores the sample rate at which the clip was created. Note that Unity performs automatic keyframe reduction when importing animations, so the sample rate does not correspond to the number of keys.
For details on the various import settings for individual animation clips, see the Mesh Component Reference.
Note: imported animations cannot be edited in the Animation View, but if you duplicate the animation within Unity, the duplicate can be edited.
Page last updated: 2012-11-26
class-Animator
Any game object that has an Avatar will also have an Animator component, which is the link between the character and its behavior.

The Animator component references an Animator Controller, which is used for setting up the character's behavior. This includes state machines, blend trees, and events to be controlled from script.
Properties
| Controller | The Animator Controller attached to this character. |
| Avatar | The Avatar for this character. |
| Apply Root Motion | Should the character's position be controlled by the animation itself, or from script? |
| Animate Physics | Should the animation interact with physics? |
| Culling Mode | Culling mode for the animation. |
| Always animate | Always animate; no culling is performed. |
| Based on Renderers | When the renderers are invisible, only the root motion is animated; all other body parts remain static while the character is invisible. |
class-AnimatorController
You can view and set up character behavior from the Animator Controller view (selected from the menu).
An Animator Controller can be created from the Project View (from the menu).
This creates a .controller asset on disk, which looks like this in the Project Browser:

An Animator Controller asset on disk
Once the state machine has been set up, you can drag and drop the controller onto the Animator component of any character with an Avatar in the Hierarchy View.

The Animator Controller window contains:
- The Animation Layer widget (top-left corner, see Animation Layers)
- The Event Parameters widget (bottom-left, see Animation Parameters)
- A visualization of the state machine itself.
Note that the Animator Controller window always displays the state machine from the most recently selected .controller asset, regardless of which scene is currently loaded.
class-Avatar
After an FBX file has been imported, you can specify the rig in the Rig tab of the FBX importer options.
Humanoid animations
For a humanoid rig, select Humanoid and click Apply. Mecanim will attempt to match your existing bone structure to the Avatar bone structure. In many cases, it can do this automatically by analyzing the connections between the bones in the rig.
If the match succeeds, you will see a check mark next to the menu.
Also, when the match succeeds, an Avatar sub-asset is added to the FBX asset, which you will see in the Project View hierarchy.
Models with and without an Avatar sub-asset
The Avatar asset shown in the Inspector
If Mecanim was unable to create the Avatar, you will see a cross next to the button, and no Avatar sub-asset will be added. When this happens, you need to configure the Avatar manually.
非Humanoidアニメーション
非Humanoidアニメーションのための2つのオプション(GenericおよびLegacy)が用意されています。GenericアニメーションはMecanimを使用してインポートできますが、Humanoidアニメーションで使用できるいくつかのすぐれた追加機能は利用できません。LegacyアニメーションはMecanim登場以前にUnityで提供されていたアニメーションシステムを使用します。Legacyアニメーションがまだ有用なケースもありますが(特に、完全にはアップデートしたくない過去プロジェクトを扱う場合)、新規プロジェクトではほぼ必要ありません。Legacyアニメーションの詳細については、マニュアルのこのセクション を参照してください。
(Avatar作成およびセットアップ に戻る)
(Mecanim紹介 に戻る)
Page last updated: 2012-10-18
class-AvatarBodyMask
アニメーションでは、Body Maskと呼ばれるものを使用して、特定の体の部分を選択的に有効または無効にすることができます。Body MaskはメッシュインポートインスペクタのAnimationタブとAnimation Layers で使用されています。 Body Maskによって、キャラクターの特定の要件に合わせてアニメーションを詳細にカスタマイズできます。たとえば、腕と脚の動きの両方を含む標準的な歩行アニメーションがあったとして、キャラクターが両手で大きな物体を運んでいる場合は、歩行中に腕が大きくスイングするのは不自然です。ただし、Body Maskで腕の動きをオフにすることで、標準の歩行アニメーションを活用することができます。
ボディパーツに含まれるのは、頭、左腕、右腕、左手、右手、左足、右足とルート(足の下の影部分)です。 Body Maskでは、手や足でインバースキネマティクス(IK)を切り替えることでIK曲線をアニメーションに含めるか決定することができます。
インスペクタ上のBody Mask(腕を除く)
メッシュインポートインスペクタのアニメーションタブでは、Clipsというリストがあり、オブジェクトのすべてのアニメーションクリップが含まれています。このリストから項目を選択すると、Body Maskエディタを含め、アニメーションクリップに対して設定できるオプションが表示されます。
またBody Maskのアセット作成(メニューから選択)により、.mask拡張子のファイルが作成されます。
Body Maskは、Animation Layers を指定する際にアニメータコントローラ で再利用することができます。
Body Maskを使用することの利点は、これらはアクティブではないボディパーツがそれに関連付けられたアニメーションカーブを必要としないため、メモリのオーバーヘッドを減少させやすい、ということです。さらに、未使用のアニメーションカーブは再生中に計算する必要がないためアニメーションによるCPUオーバーヘッドを削減しやすくなります。
(メカニム紹介 に戻る)
Page last updated: 2012-11-26
class-AvatarSkeletonMask
アバター ボディマスクとほぼ同じですが、ジェネリックアニメーションに用いられます。
Page last updated: 2012-11-09
class-HumanTemplate
アバターへのスケルトンのボーンマッピングは、ヒューマン テンプレートファイル(拡張子*.ht)としてディスク上に保存することができます。このファイルにより、同じマッピングを使用する任意のキャラクターでボーンマッピングを再利用できます。たとえば、アニメーターが一貫性のあるレイアウトとスケルトン命名規則を使用しているものの、メカニムがそれを解釈できないケースに有効です。
各モデルで.htファイルを読み込むことで、手動での再マッピングは一回だけで済ませることができます。
Page last updated: 2012-11-09
class-State
Animation State
Animation StateはAnimation State Machinesの基本構成要素です。各ステート(状態)には、個々のアニメーションシーケンス(またはブレンドツリー)が含まれていて、キャラクターがそのステートにある間再生されます。ゲーム内のイベントでステート遷移がトリガーされると、キャラクターは新しいステートに移行し、対応するアニメーションシーケンスに動作が遷移します。
アニメーターコントローラーのステートを選択すると、インスペクタ上で、そのステートに対応するプロパティが表示されます:

| Speed | アニメーションのデフォルトの速度 |
| Motion | ステートに割り当てられているアニメーションクリップ |
| Foot IK | ステートで足のIKを有効にするか |
| Transitions | ステートの遷移先ステート一覧 |
茶色で表示されるデフォルトのステートは、ステートマシンが最初に起動されたときのステートです。デフォルトのステートを変更したい場合は、別のステート上で右クリックし、コンテキストメニューから選択します。各遷移上のsoloおよびmuteのチェックボックスは、Animation Viewの動作を制御するために使用されます。詳細はこのページ を参照のこと。
新しいステートを追加するには、Animator Controller Windowのどこかを右クリックし、コンテキストメニューから選択します。別の方法として、Animator Controller Windowにアニメーションをドラッグすることで、そのアニメーションを含むステートを作成することができます(コントローラーにドラッグできるのはメカニムアニメーションのみで、非メカニムアニメーションはリジェクトされることに留意してください)。ステートにはブレンドツリー を含めることもできます。
Any State
Any Stateは常に存在する特殊なステートです。現在どのステートにいるかにかかわらず、特定のステートに遷移したい場合のために存在しています。これは、全ステートに同じ遷移先を追加するのと同じ効果があります。Any Stateは、その特殊な機能により、遷移先とすることはできません(次の遷移先としてランダムなステートを選択するための手段としてはAny Stateは使用できないことに留意してください)。

(Animation State Machines に戻る)
Page last updated: 2012-11-26
class-Transition
Animation Transitions
Animation Transitionsは、あるAnimation Stateから別のステートに切り替わるときに何が起こるかを定義します。任意の時点でアクティブな遷移はひとつのみです。
| Atomic | 遷移がアトミックか(中断が出来ない) |
| Conditions | いつ遷移がトリガーされるか |
Conditionは2つの部分から成ります:
- 条件述語 (If, If Not, Less, Greater, Equals, Not Equal, および Exit Time)
- イベントパラメータ(IfとIf Notでbool型と連動、Exit Timeはtime型を使用)。
- パラメータ値(必要な場合)
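遷移条件で参照されるイベントパラメータは、スクリプトから更新できます。以下はC#での概略で、"Run" というbool型パラメータ名は説明用の仮のものです(遷移条件でIf/If Notとして参照されている想定)。

```csharp
using UnityEngine;

public class RunParameter : MonoBehaviour {
    private Animator animator;

    void Start () {
        // 同じGameObject上のAnimatorコンポーネントを取得する
        animator = GetComponent<Animator>();
    }

    void Update () {
        // "Run" パラメータを更新すると、それを条件とする遷移がトリガーされる
        animator.SetBool("Run", Input.GetKey(KeyCode.LeftShift));
    }
}
```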
2つのアニメーションクリップ間の遷移は、開始値と終了値をドラッグすることによって、重なりを調整することができます。

(Animation State Machines にもどる)
Page last updated: 2012-11-26
comp-AssetsGroup
Assetsはゲーム作成で使用するモデル、テクスチャ、サウンドやその他のコンテンツファイルです。
本項ではすべてのAssetsタイプにわたってComponentについて説明します。一般的なAssetsの概要についてはAssets概要ページ を参照のこと。
- オーディオ クリップ
- キューブマップ テクスチャ
- メッシュ
- フレア
- フォント
- マテリアル
- メッシュ
- ムービー テクスチャ
- 手順マテリアル アセット
- レンダー テクスチャ
- テキスト アセット
- テクスチャ2D
class-AudioClip
Audio Clip は、Audio Source によって使用されるオーディオ データです。 Unity は、モノ、ステレオおよびマルチ チャンネル (8 つまで) のオーディオ アセットをサポートしています。 Unity は、次のオーディオ ファイル形式をサポートしています。 .aif、.wav、.mp3、 .oggおよび次の トラッカー モジュール ファイル形式: .xm、.mod、.itおよび .s3m 。 トラッカー モジュール アセットは、波形プレビューをアセット インポート インスペクタにレンダリングできないこと以外は、Unity のその他のオーディオ アセットと同じ働きをします。

「オーディオ クリップ Inspector」
プロパティ
| Audio Format | ランタイム時に音声に使用される特定の形式。 |
| Native | ファイル サイズが大きくなるにつれ、品質が高くなります。 非常に短い音響効果に最適です。 |
| Compressed | ファイル サイズが小さくなるにつれ、品質が低くなるか、変わりやすくなります。 中程度の長さの音響効果や音楽に最適です。 |
| 3D Sound | 有効にすると、3D スペースで音声が再生されます。 モノとステレオの音声の両方を 3D で再生できます。 |
| Force to mono | 有効にすると、オーディオ クリップが 1 つのチャンネル音声にダウンミックスされます。 |
| Load Type | Unity がランタイムで音声をロードする方法。 |
| Decompress on load | ロード時に音声を解凍します。 オン ザ フライの解凍の性能オーバーヘッドを回避するため、より小さい圧縮音声に使用します。 ロード時の音声の解凍では、メモリ内で圧縮状態を維持する場合の 10 倍以上のメモリを使用するため、大きなファイルには使用しないでください。 |
| Compressed in memory | メモリ内で圧縮状態を維持し、再生時には解凍します。 若干の性能オーバーヘッドが生じるため (特に Ogg/Vorbis 圧縮ファイル)、大きいファイルにのみ使用してください。技術的な制約により、FMODオーディオを使用するプラットフォーム上では、Ogg Vorbisについてこのオプションは「Stream from disc」(下記参照)に切り換わることに注意してください。 |
| Stream from disc | ディスクから直接オーディオ データを流します。これは、メモリの元の音声サイズの一部を使用します。 音楽や非常に長いトラックに使用してください。 一般的に、ハードウェアに応じて、1 ~ 2 の同時ストリームに抑えてください。 |
| Compression | 「圧縮」クリップに適用される圧縮の量。 ファイル サイズに関する統計はスライダの下で確認できます。 スライダをドラッグして、再生音質が「十分良好」でありながら、ファイルサイズや配布上のニーズに見合う十分小さいサイズになるように調整してください。 |
| Hardware Decoding | (iOS のみ) iOS 機器上の圧縮オーディオに使用できます。 解凍時の CPU への負担を減らすため、Apple のハードウェア デコーダを使用します。 詳細については、プラットフォーム固有の詳細を確認してください。 |
| Gapless looping | (Android/iOS のみ) 完全ループのオーディオ ソース ファイル (非圧縮 PCM 形式) を圧縮する際に、そのループを残すために使用します。 標準の MPEG エンコーダは、ループ点周辺にサイレンスを取り込んでいますが、これはちょっとした「クリック」または「ポップ」として再生します。 Unity ではこれは円滑に扱われます。 |
オーディオ アセットのインポート
Unity は「圧縮」と「ネイティブ」オーディオの両方をサポートしています。 どのファイルも (MP3/Ogg Vorbis を除く) 最初は「ネイティブ」としてインポートされます。 圧縮オーディオ ファイルはファイル サイズが小さくなりますが、ゲーム稼働中に CPU によって解凍される必要があります。 「Stream」にチェックを入れると、オーディオは「オン ザ フライ」で解凍され、そうでない場合は、ロード時に全体が解凍されます。 ネイティブの PCM 形式 (WAV、AIFF) には CPU への負担を増やすことなく、高い忠実性があるという利点がありますが、作成されるファイルのサイズははるかに大きくなります。 モジュール ファイル (.mod、.it、.s3m、.xm) は、極めて低いフットプリントで非常に高い音質を提供できます。
一般的に、「圧縮」オーディオ (またはモジュール) は、BGM や会話などの長いファイルに最適で、非圧縮オーディオは、短い音響効果により適しています。 高圧縮から始めて、圧縮スライダで圧縮の量を弱め、音質の差が著しくなる前後で適切に微調整します。
3D オーディオの使用
オーディオ クリップに「3D 音声」と表示されている場合、このクリップは、ゲームの世界の 3D スペースでの位置をシミュレートするために再生されます。 3D 音声は、音量を減らし、スピーカー間でパンすることで、音声の距離や位置をエミュレートします。 モノとマルチ チャンネルの音声の両方を 3D に配置できます。 マルチ チャンネル オーディオの場合、Audio Source の「Spread」オプションを使用して、スピーカー スペースで個々のチャンネルを拡散および分割します。 Unity は、3D スペースでのオーディオ の動作を制御および微調整するための各種オプションを提供しています。 Audio Source を参照してください。
プラットフォーム固有の詳細

iOS
携帯プラットフォーム上では、解凍時の CPU の負担を減らすため、圧縮オーディオは MP3 として符号化されます。
パフォーマンス上の理由から、オーディオ クリップは、Apple ハードウェア コーデックを使用して再生できます。 これを有効にするには、オーディオ インポータの「ハードウェア デコーディング」チェックボックスにチェックを入れます。 ハードウェア オーディオ ストリームは、バックグラウンドの iPod オーディオを含め、一度に 1 つしかデコードできないことに留意してください。
ハードウェア デコーダを使用できない場合は、解凍はソフトウェア デコーダで行われます (iPhone 3GS 以降では、Apple のソフトウェア デコーダが Unity(FMOD) 自身のデコーダより優先して使用されます)。

Android
携帯プラットフォーム上では、解凍時の CPU の負担を減らすため、圧縮オーディオは MP3 として符号化されます。
class-Cubemap
Cubemap Texture は、架空の立方体の 6 つの各面に貼られる個々の正方形のテクスチャの集合です。 このテクスチャは、Skybox が背景の遠くの風景を表示するのと同様、オブジェクトへのかなり遠くの反射を表示するのに使用されます。 Unity のReflective 組み込みシェーダーは、Cubemap を使用して、反射を表示します。

この球体上の反射として表示される山のシーンの Cubemap
Cubemap は、次のいずれかの方法で作成します。
- を使用し、そのプロパティを設定して、6 つの Texture アセットを対応する Cubemap の面にドラッグします。 テクスチャは Cubemap アセットにベーキングされ、そのテクスチャとはリンクしなくなるので、変更した場合は、テクスチャを再度適用する必要があります。
- Texture インポート設定を使用して、1 つのテクスチャ アセットから Cubemap を作成します。
- スクリプトから、Cubemap にシーンをレンダリングします。 Camera.RenderToCubemap ページのコード例には、エディタから直接 Cubemap をレンダリングするためのスクリプトが含まれています。
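3 つ目の方法の概略として、Camera.RenderToCubemap を使用する C# スクリプトの例を示します(cubemap 変数には事前に作成した Cubemap アセットを割り当てる想定の、仮のスケッチです)。

```csharp
using UnityEngine;

public class CubemapCapture : MonoBehaviour {
    // 事前に作成したCubemapアセットをインスペクタで割り当てる(仮)
    public Cubemap cubemap;

    void Start () {
        // このGameObjectのカメラ位置からシーンをCubemapの6面にレンダリングする
        camera.RenderToCubemap(cubemap);
    }
}
```

このスクリプトをカメラ付きのGameObjectにアタッチすると、起動時にその位置からのCubemapが焼き込まれます。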
プロパティ
| Right (+X) | Cubemap の面の右グローバル側のテクスチャ。 |
| Left (-X) | Cubemap の面の左グローバル側のテクスチャ。 |
| Top (+Y) | Cubemap の面の上グローバル側のテクスチャ。 |
| Bottom (-Y) | Cubemap の面の下グローバル側のテクスチャ。 |
| Front (+Z) | Cubemap の面の前方グローバル側のテクスチャ。 |
| Back (-Z) | Cubemap の面の後方グローバル側のテクスチャ。 |
| Face Size | 個々の Cubemap の面の幅と高さ (単位: ピクセル)。 このサイズにフィットするよう、テクスチャが内部で拡大されます。アセットを手動で拡大する必要はありません。 |
| Mipmap | ミップマップの作成を可能にします。 |
| Format | 作成された Cubemap の形式。 |
class-FBXImporter
3Dモデルをインポートすると、UnityはMeshとして内部的に格納します。MeshはMeshフィルタ コンポーネント を使用してゲームオブジェクトにアタッチする必要があります。Meshを表示できるようにするためには、ゲームオブジェクトにはさらにMeshレンダラ あるいは他の適切なレンダラコンポーネントを持っている必要があります。これらのコンポーネントを使用することで、Meshはレンダラによって使用されるマテリアルどおりの外観で、ゲームオブジェクトの位置に表示されます。
UnityのMeshインポーターはMeshの生成を制御したり、テクスチャやマテリアルに関連付けるため、多くのオプションを用意しています。これらのオプションは、次のページで説明されています。
Page last updated: 2012-11-26
FBXImporter-Model
モデルファイルのImport Settingsは、モデルが選択されたときに、FBXインポータインスペクタのModelタブに表示されます。この設定はメッシュや法線マップ、インポートされたマテリアルに影響を与えます。設定はディスク上のアセットごとに適用されますので、別の設定を持ったアセットが必要な場合は複製したうえで適切にリネームします。
最初はデフォルト設定で十分ですが、ゲームオブジェクトで実現したいことによっては、以下の設定を知っておく価値があります。
たとえば、よくある設定変更を挙げると:
- Scale Factor - Unity世界での1単位と3Dモデリングツールの1単位を換算するために使用され、ファイル全体を拡大・縮小します。単位が問題ではない場合は、単に1に設定することができます。
- Generate colliders - モデルが他のオブジェクトと衝突するようにするための衝突メッシュを生成します。以降の注意事項を参照のこと。
- Material Naming、Material Search - マテリアルを自動的にセットアップし、テクスチャを認識するのに役立ちます。

FBXインポータ インスペクタ:[Model]タブ
| Meshes | ||
| Scale Factor | Unity の物理特性システムは、ゲーム世界での 1 メートルを、インポートされたファイルでの 1 単位と考えます。 異なるスケールでモデリングしたい場合は、ここで修正します。異なる3Dパッケージのデフォルトは次のとおりです。 (.fbx、.max、.jas、.c4d) = 0.01、(.mb、.ma、.lxo、.dxf、.blend、.dae) = 1、(.3ds) = 0.1 | |
| Mesh Compression | この値を上げると、メッシュのファイル サイズが小さくなりますが、異常が生じる可能性があります。 メッシュが解凍後と違いすぎないように、できる限り高い値に設定します。これは、ゲームサイズの最適化 に便利です。 | |
| Read/Write Enabled | ランタイム時にメッシュを書込可能とし、メモリ内でコピーが作成されます。 | |
| Optimize Mesh | このオプションは、三角形メッシュが表示される順序を決定します。 | |
| Generate Colliders | メッシュに自動的にメッシュコライダーをアタッチしてインポートします。背景の物体の衝突メッシュを簡易に生成するのに便利ですが、移動させる物体では避けるべきです。詳細情報についてはコライダー を参照してください。 | |
| Swap UVs | Lightmapped オブジェクトが間違えた UV チャンネルを選択した場合に使用します。 これにより、1 次および 2 次 UV チャンネルがスワップされます。 | |
| Generate Lightmap | これを使用し、Lightmapping に使用する UV2 を作成します。 | |
| Advanced Options | ライトマップUV を参照して下さい。 | |
| Normals&Tangents | ||
| Normals | 法線を計算するかどうか、およびどのように計算するかを定義します。これは、ゲームサイズの最適化 に便利です。 | |
| Import | デフォルトのオプション。ファイルから法線をインポートします。 | |
| Calculate | 「Smoothing angle」に基づいて、法線を計算します。選択すると、「Smoothing Angle」が有効になります。 | |
| None | 法線を無効にします。 このオプションは、メッシュがマップされた法線およびリアルタイムのライティングの影響を受けている法線でない場合に使用します。 | |
| Tangents | 接線および従法線を計算するかどうか、およびどのように計算するかを定義します。これは、ゲームサイズの最適化 に便利です。 | |
| Import | ファイルから接線および従法線をインポートします。このオプションは、FBX、Maya および 3dsMax ファイルにのみと、法線がファイルからインポートされる時にのみ使用できます。 | |
| Calculate | デフォルトのオプション。 接線および従法線を計算します。このオプションは、法線がインポートまたは計算されるかのいずれかの場合でのみ使用できます。 | |
| None | 接線および従法線を無効にします。メッシュに接線はないため、法線をマップしたシェーダとは使用できません | |
| Smoothing Angle | 端の鋭さの度合いを硬い端として設定します。これは、法線のマップ接線を分割するのにも使用されます。 | |
| Split Tangents | 法線マップライティングがメッシュ上の継ぎ目で分断される場合に、これを有効にします。 通常これはキャラクターにのみ適用されます。 | |
| Materials | ||
| Import Materials | 無効にするとマテリアルが生成されなくなります。デフォルトのDiffuseマテリアルが代わりに使用されます。 | |
| Material Naming | Unityのマテリアルの命名を制御します | |
| By Base Texture Name | Unityのマテリアルを命名する際に基準とするインポートされたマテリアルのDiffuseテクスチャ。Diffuseテクスチャがマテリアルに割り当てられていない場合、Unityはインポートされたマテリアルの名前を使用します。 | |
| From Model's Material | Unityのマテリアルを命名する際に基準となるインポートされたマテリアル。 | |
| Model Name + Model's Material | Unityのマテリアルを命名する際にモデルファイル名とインポートされたマテリアル名の組み合わせが使用されます。 | |
| Texture Name or Model Name + Model's Material (Obsolete) | インポートされたマテリアルのDiffuseテクスチャ名を使用してUnityのマテリアルを命名します。Diffuseテクスチャがマテリアルに割り当てられていない場合は、モデル名とモデルのマテリアル名の組み合わせを使用します。このオプションは、Unity 3.4(およびそれ以前のバージョン)の動作と下位互換性があります。推奨設定はBy Base Texture Nameであり、もっともシンプルで一貫性のある動作になります。 | |
| Material Search | Material Namingオプションで定義された名前でUnityがマテリアルを検索する場所を制御します: | |
| Local | Unityは、マテリアルと同じフォルダ内でのみ(すなわちモデルファイルと同一フォルダ)、同じ名前のマテリアルがあるかチェックします。 | |
| Recursive-Up | Unityは、マテリアルと同じフォルダ内、およびAssetsフォルダに到達するまで、その全ての上位フォルダで、同じ名前のマテリアルがあるかチェックします。 | |
| Everywhere | Unityは、プロジェクト フォルダ全体から同じ名前のマテリアルがあるかチェックします。 | |
FBXImporter-Rig
リグタブでは、インポートしたスキニングモデルにアバター定義をアサインあるいは作成し、アニメーションをつけることができます。アセット準備およびインポート を参照してください。
ヒューマノイド キャラクター、すなわち二足歩行で二本の腕と頭がある場合は、Humanoidを選択しCreate from this modelを選択することで、ボーン階層にもっとも適合したアバターを作成できます。アバター作成およびセットアップ を参照のこと。別の方法として、既にセットアップされたアバター定義を選択することもできます。
一方、非ヒューマノイド キャラクター、たとえば四足歩行の動物やその他メカニム でアニメーションさせたい存在の場合は、Genericを選択した後、ドロップダウンボックスからルートノードとするボーンを選択する必要があります。
Unity 3.x以前のレガシー アニメーションシステムでインポートおよびアニメーションを実行したい場合は、Legacyを選択してください。

| Animation Type | アニメーションのタイプ。 |
| None | アニメーションなし |
| Legacy | レガシー アニメーションシステム |
| Generic | ジェネリック メカニム アニメーション |
| Humanoid | ヒューマノイド アニメーションシステム |
| Avatar Definition | アバター定義を取得する場所 |
| Create from this model | アバターの作成元とするモデル |
| Copy from other Avatar | 別モデルでセットアップされたアバターコンフィグ設定を指定 |
| Configure... | アバターコンフィグ設定 を参照のこと |
| Keep additional bones |
FBXImporter-Animations

| Animations | |
| Generation | アニメーションをインポートする方法を制御します |
| Don't Import | アニメーションやスキニングをインポートしません |
| Store in Original Roots | アニメーションをアニメーションパッケージのルートオブジェクトに格納します(Unity上のルートオブジェクトとは異なる場合があります)。 |
| Store in Nodes | アニメーションは、その対象となるオブジェクトと一緒に格納されます。複雑なアニメーションを設定していて、スクリプトで完全に制御したい場合に使用してください。 |
| Store in Root | アニメーションがシーンのトランスフォーム ルートオブジェクトに格納されます。階層構造を持っている何かをアニメーション化する場合に使用して下さい。 |
| Bake Animations | アニメーションパッケージで、IKやシミュレーションを使用している場合オンにして下さい。Unityはインポート時にフォワード キネマティクスに変換します。このオプションは、Maya、3dsMaxとCinema4Dファイルでのみ利用可能です。 |
| Animation Wrap mode | インポートされたメッシュ上のアニメーションのデフォルトのWrap Mode |
| Default | アニメーションを下のアニメーション分割オプションで指定したとおりに再生 |
| Once | アニメーションは一度だけ再生。その後は停止 |
| Loop | アニメーション再生終了後にループ再生 |
| PingPong | アニメーションが再生終了時に、繰り返し逆方向へ再生 |
| ClampForever | アニメーション再生後、最後のフレームが無限に繰り返される。Onceモードでの再生では厳密には最後のフレームまで到達しないため、違いがあります(アニメーションのブレンドの場合などで便利です) |
| Split Animations | あなたが単一のファイルに複数のアニメーションを持っている場合は、複数のクリップに分割することができます。 |
| Name | 分割アニメーションクリップの名前 |
| Start | モデルファイル内のクリップの最初のフレーム |
| End | モデルファイル内のクリップの最後のフレーム |
| WrapMode | 分割されたクリップの再生終了後の動作(前述のWrap Modeのオプションと同じ) |
| Loop | アニメーションの作成された方法によっては、分割されたクリップが適切にループするためにはアニメーションに追加で1フレーム必要な場合があります。アニメーションのループが正しいように見えない場合は、このオプションを有効にしてみてください。 |
| Animation Compression | |
| Anim. Compression | メッシュのアニメーションに適用される圧縮のタイプ |
| Off | アニメーション圧縮をオフにする。Unityでインポート時にキーフレーム数を削減しません。アニメーション品質は最高になる一方で、パフォーマンス低下、ファイルサイズ増大、およびランタイム メモリーサイズ増大を意味します。一般的に、このオプションの使用は推奨しません。別の方法としては、より精度の高いアニメーションが必要な場合にKeyframe Reductionをオンにして、Animation Compressionの値を許容できるところまで下げます。 |
| Keyframe Reduction | インポート時にキーフレームを減らします。選択した場合は、Animation Compression Errorsオプションが表示されます。 |
| Keyframe Reduction and Compression | インポート時にキーフレームを減らし、ファイルにアニメーションを格納する時にキーフレームを圧縮します。ファイルサイズのみに影響があり、ランタイムのメモリーサイズは、Keyframe Reductionをオンにした場合と同じになります。選択した場合は、Animation Compression Errorsオプションが表示されます。 |
| Animation Compression Errors | これらのオプションはKeyframe Reductionがオンの場合のみ使用できます。 |
| Rotation Error | 回転カーブを削減する度合いを定義します。小さい値ほど高いアニメーション品質が得られます。 |
| Position Error | 位置カーブを削減する度合いを定義します。小さい値ほど高いアニメーション品質が得られます。 |
| Scale Error | 大きさカーブを削減する度合いを定義します。小さい値ほど高いアニメーション品質が得られます。 |
class-Flare
Flare オブジェクトは、Lens Flare Components によって使用されるソース アセットです。 フレア自体は、テクスチャ ファイルと、フレアの動作を決定する特定の情報の組み合わせです。 そのため、Sceneでフレアを使用したい場合は、GameObject に追加された LensFlare Component 内から特定のフレアを参照します。
Standard Assets パッケージにフレアのサンプルが置いてあります。 シーンにこれらのいずれかを追加したい場合は、 Lens Flare コンポーネントを GameObject に追加し、Material を Mesh Renderer に割り当てるのと同様、使用したいフレアをレンズ フレアのFlareプロパティにドラッグします。

フレア Inspector
フレアは、単一の Texture にあるいくつかのフレア要素を含めることで機能します。 フレア内で、テクスチャのいずれかから含めたい要素をつかんで、選択します。
プロパティ
| Elements | フレア内に含まれたフレア画像の数。 |
| Image Index |この要素に対して、Flare Textureから使用されるフレア画像。 詳細については、下のFlare Textures を参照してください。 | |
| Position | 含んでいる GameObject の位置から画面中央を通る線に沿った要素のオフセット。 0 = GameObject の位置、1 = 画面中央。 |
| Size | 要素のサイズ。 |
| Color | 要素の色。 |
| Use Light Color | フレアを光に追加する場合、これを有効にすることで、その光の色がフレアに付きます。 |
| Rotate | 有効にすると、要素の下部が常に画面の中央を向き、レンズ フレアが画面周辺で回転すると共に、要素を回転させます。 |
| Zoom | 有効にすると、表示した時に要素が拡大し、非表示にすると、再度縮小します。 |
| Fade | 有効にすると、表示した時に要素がフェードインし、非表示にすると、フェードアウトします。 |
| Flare Texture | このフレアの要素によって使用される画像を含むテクスチャ。 TextureLayoutオプションのいずれかに応じて配置する必要があります。 |
| Texture Layout | Flare Texture内での個々のフレア要素画像の配置のされ方。![]() |
| Use Fog | 有効にすると、フレアは遠くにある霧と共にフェードアウェイします。 一般に小さいフレアに使用されます。 |
詳細
フレアは、線にそって配置された複数の要素で構成されます。 この線は、レンズ フレアを含む GameObject の位置と、画面の中心を比較することで計算されます。 線は、含んでいる GameObject と画面の中心を超えて伸びます。 フレア要素はすべてこの線上で伸びます。
フレア テクスチャ
パフォーマンス上の理由から、1 つのフレアのすべての要素は、同じテクスチャを共有する必要があります。 このテクスチャには、1 つのフレア内の要素として使用できる様々な画像が含まれます。 テクスチャ レイアウトは、フレア テクスチャ内で要素がどのように配置されるかを定義します。
テクスチャ レイアウト
異なるフレアテクスチャ レイアウトに対するオプションです。 画像内の数字は、各要素に対する画像インデックスプロパティに対応しています。
| 1 Large 4 Small | ![]() 要素の 1 つに他の要素よりも高い忠実度を与える必要がある、太陽のような大きいフレア向けに設計されています。 幅が高さの 2 倍のテクスチャで使用するよう設計されています。 |
| 1 Large 2 Medium 8 Small | ![]() 1 つの高解像度画像、2 つの中サイズ画像と 8 つの小さい画像を必要とする複雑なフレア向けに設計されています。 これは、2 つの中サイズの要素が虹色の円である標準の50mm Zoom Flareアセットで使用されます。 幅が高さの 2 倍のテクスチャで使用するよう設計されています。 |
| 1 Texture | ![]() 1 つの画像。 |
| 2x2 grid | ![]() シンプルな 2x2 グリッド。 |
| 3x3 grid | ![]() シンプルな 3x3 グリッド。 |
| 4x4 grid | ![]() シンプルな 4x4 グリッド。 |
ヒント
- 異なる多くのフレアを使用する場合、すべての要素を含む 1 つのフレア テクスチャを使用すると、最高のレンダリング パフォーマンスが得られます。
- レンズ フレアは、Colliders によってブロックされます。 フレアの GameObject とカメラの間にあるコライダは、Mesh Renderer を持たない場合でも、フレアを非表示にします。
class-Font
フォント
フォントは、GUI Text または Text Mesh コンポーネントで使用するために、作成またはインポートできます。
True Type Font ファイル (.ttf) のインポート
プロジェクトにフォントを追加するには、Assets フォルダに .ttf ファイルを置く必要があります。 Unity が、自動的にフォントをインポートします。 そのフォントに拡張子 .ttf があるか確認してください。ない場合、Unity はフォントを認識しません。
フォントの「サイズ」を変更するには、Project View で強調表示すると、Inspector の Import Settings に多くのオプションが表示されます。

「フォント用のインポート設定」
| Font Size | ワード プロセッサに設定されたサイズに基づいた、フォントのサイズ。 |
| Character | フォントのテキスト エンコーディング。 ここでは、フォントを大文字または小文字のみで表示させることができます。 |
| このモードを Dynamic に設定すると、ユーティリティがOS の基本フォント レンダリング ルーチンを使用するようになります (下記参照)。 | |
| 2.x font placing | Unity 3.x が、2.x よりも、印刷的により正しいフォントの縦配置を使用します。 フォント テクスチャをレンダリングする際に計算する代わりに、True Type フォントに保存されたフォント アセントを使用します。 このプロパティにチェックを入れると、2.x の縦配置が使用されます。 |
「非 Dynamic フォントに固有のインポート設定」
| Font Rendering | フォントに適用されるアンチエイリアス処理の量。 |
「Dynamic フォントに固有のインポート設定」
| Style | フォントに適用されるスタイルで、通常、太字、斜体、太字と斜体のいずれか。 |
| Include Font Data | Dynamic フォント プロパティと併用される際に、この設定がフォントのパッケージングを制御します。 選択すると、TTF がビルドの出力に含まれます。 選択しないと、エンド ユーザーがマシンにインストール済みのフォントを持っているとみなされます。 フォントは著作権の対象となるため、使用許諾を受けたフォントか、自身で作成したフォントのみ使用してください。 |
| Font Names | [Include Font Data] を選択していない場合にのみ使用できます。 コンマで区切られた、フォント名のリストを指定します。 これらのフォントは、左から右へと順に試行され、ユーザーのマシンで最初に検出されたフォントが使用されます。 |
フォントをインポート後に、プロジェクト ビューでそのフォントを展開し、自動生成されたアセットがあるかを確認できます。 次の2 つのアセットはインポート中に作成されます。 「フォント マテリアル」と「フォント テクスチャ」
Dynamic フォント
Unity 3.0 では、Dynamic フォントのレンダリングのサポートが追加されました。 インポート設定の「Characters」ドロップダウンリストを「Dynamic」に設定すると、Unity はすべてのフォント キャラクターを持つテクスチャを事前に生成しません。 代わりに、OS に組み込まれたフォント レンダリングを使用して、オン ザ フライでテクスチャを作成します。 これには、ダウンロード サイズやテクスチャ メモリを節約できるという利点があります。特に、ユーザーのシステムに一般に含まれるフォントを使用するためにフォント データを含める必要がない場合や、アジア言語や大きいフォント サイズ (通常のフォント テクスチャではテクスチャが非常に大きくなる) をサポートする必要がある場合に有効です。
Unicode サポート
Unity はUnicode を完全にサポートしています。 Unicode のテキストにより、通常 ASCII 文字セットでサポートされていないドイツ語、フランス語、デンマーク語、日本語の文字を表示することができます。 また、フォントがサポートしている場合に、矢印やオプション キーなどの多くの異なる特殊用途文字を入力することができます。
Unicode 文字を使用するには、インポート設定の「Characters」ドロップダウンから「Unicode」か「Dynamic」のいずれかを選択します。 このフォントで Unicode 文字を表示できます。 GUIText または Text Mesh を使用している場合、インスペクタにあるコンポーネントの「Text」フィールドに Unicode 文字を入力できます。 Mac 上のインスペクタでは、Unicode 文字が正しく表示されない場合があります。
また、スクリプトから表示されるテキストを設定したい場合も、Unicode 文字を使用できます。 JavaScript と C# コンパイラは、Unicode ベースのスクリプトを完全にサポートしています。 UTF-16 エンコーディングでスクリプトを保存するだけでかまいません。 Unitron では、メニューから選択することで、この操作を行うことができます。 Unicode 文字をスクリプトの文字列に追加すると、UnityGUI や、GUI テキスト、またはテキスト メッシュで期待どおりに表示されます。 PC で UniSciTE をスクリプトの編集に使用する場合は、UCS-2 Little Endian エンコーディングを使用して、スクリプトを保存します。
フォント カラーの変更
フォントの使用方法に応じて、表示フォントの色を変更する方法がいくつかあります。
GUI テキストとテキスト メッシュ
GUI テキストまたはテキスト メッシュを使用して、フォントに対してカスタムの Material を使用して、その色を変更できます。 プロジェクト ビューで、 をクリックし、Inspector で新規作成したマテリアルを選択および設定します。 フォント アセットからそのマテリアルにテクスチャを割り当ててください。 フォント マテリアルに組み込みの「GUI/Text Shader」シェーダーを使用することで、マテリアルの「テキスト カラー」で色を選択できます。
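概略として、割り当てたカスタムマテリアルの色をスクリプトから変更する C# の例を示します(GUIText コンポーネントにカスタムマテリアルが設定済みであることを仮定したスケッチです)。

```csharp
using UnityEngine;

public class TextColorChanger : MonoBehaviour {
    void Start () {
        // このGameObjectのGUITextに割り当てられたマテリアルの色を変更する
        guiText.material.color = Color.green;
    }
}
```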
UnityGUI
UnityGUI スクリプティングを使用してフォントを表示する場合、状況に応じてフォントの色をより細かく制御できます。 フォントの色を変更するには、GUISkin を作成し、特定の制御状態に対して色を定義します。 詳細については、GUI Skin page を参照してください。
ヒント
- インポートされたフォントを表示するには、フォントを選択し、 を選択します。
- 小文字または大文字のみ使用することで、生成されたテクスチャ サイズが小さくなります。
- Unity が提供するデフォルトのフォントは、Arial になります。 このフォントは、常に使用可能で、プロジェクト ビューには表示されません。
class-Material
マテリアルは、GameObject に追加された Mesh または Particle Renderers と併せて使用できます。 マテリアルは、オブジェクトの表示のされ方を定義する上で重要な役割を担っています。 マテリアルには、Mesh または Particlesのレンダリングに使用される Shader への参照が含まれているため、これらのコンポーネントは、一部のマテリアルなしでは表示できない場合があります。

拡散シェーダ マテリアルには、色とテクスチャの 2 つのプロパティしかありません。
プロパティ
マテリアルのプロパティは、選択したシェーダによって変わります。 下記のプロパティは、最もよく使用されるプロパティです。
| Shader | マテリアルによって使用されるシェーダ。 詳細については、Built-in Shader Guide を参照してください。 |
| Main Color | 適用できる色合い。 白の場合、色合いは付きません。 |
| Base | 表示される Texture。 |
詳細
マテリアルを使用して、Textures を GameObject に置くことができます。 マテリアルなしでテクスチャを直接追加することはできません。追加しても、暗黙のうちに新しいマテリアルが作成されます。 適切なワークフローは、マテリアルを作成し、シェーダを選択、それと共に表示するテクスチャ アセットを選択することです。 マテリアルの詳細については、Materials に関するマニュアルのページを参照してください。
シェーダの選択
マテリアル作成後は、まずどのシェーダを使用するかを決めます。 ドロップダウンのShaderメニューから選択します。

Shaderドロップダウン メニュー
プロジェクトの Assets フォルダにあるシェーダか、または組み込みのシェーダを選択できます。 自身でシェーダを作成することもできます。 組み込みシェーダの使用の詳細については、Built-in Shader Guide を参照してください。 シェーダの作成の詳細については、マニュアルの Shaders および ShaderLab Reference を参照してください。
シェーダ プロパティの設定
選択したシェーダのタイプに応じて、各種プロパティが Inspector に表示されます。

反射鏡シェーダのプロパティ

法線マップ シェーダのプロパティ

法線マップ反射鏡シェーダのプロパティ
シェーダには次の各種プロパティがあります。
| Color pickers | 色を選択するのに使用されます。 |
| Sliders | 許容範囲上で数値を微調整するのに使用されます。 |
| Textures | テクスチャの選択に使用されます。 |
テクスチャの貼り付け
OffsetとTilingプロパティを変更することで、テクスチャの貼り付けを変更できます。

Tilingプロパティを変更することで、このテクスチャには 2x2 倍のタイルが貼られます
| Offset | テクスチャをスライドします。 |
| Tiling | 異なる軸に沿ってテクスチャにタイルを張ります。 |
ヒント
- できるだけ多くの GameObject 上で、1 つのマテリアルを共有する方がいいでしょう。 これには、非常に大きなパフォーマンス上の利点があります。
class-Mesh
3D ワールドの大半は、Mesh で構成されています。Asset Storeのプラグインを除けば、Unityにはモデリングツールが含まれていません。しかしUnityはほとんどの3Dモデリングパッケージと強力な連携機能を持っています。Unityは三角、四角ポリゴンのメッシュをサポートしています。NURBS、NURMS、サブディビジョンサーフェスは、ポリゴンに変換する必要があります。

3Dフォーマット
Unityにメッシュをインポートするには、主に2種類のファイルを使用できます:
- エクスポートされた3Dファイルフォーマット。例えば .FBX あるいは .OBJ
- 3Dアプリケーションの専用ファイル。たとえば3D Studio Maxの.MaxやBlenderの.Blendなどのファイル形式をサポートします。
どちらでもUnityにメッシュに取り込むことができますが、どちらを選ぶかにあたって考慮事項があります。
エクスポートされた3Dファイル形式
Unityは .FBX、.dae (Collada)、.3DS、.dxf および .obj ファイルを読み込むことができます。FBXエクスポータはここ で見つけられるほか、多くのアプリケーションにobjやColladaのエクスポータがあります。
長所
- 必要なデータのみエクスポート
- 検証可能なデータ(Unityに持っていく前に3Dパッケージに再インポートできます)
- 一般的にファイルサイズが小さい
- モジュール単位のアプローチを推進できる(例. 衝突の型やインタラクティブ性ごとに異なるコンポーネント)
- 直接サポートしていないその他の3Dパッケージで独自形式があっても、この形式にすればサポートできます
短所
- プロトタイプや反復作業としては時間のかかるパイプラインとなります
- ソース(作業ファイル)とゲームデータ(例えば、エクスポートされたFBX)の間のバージョン管理を見失いがちになります
独自の3Dアプリケーションファイル形式
Unityが変換を通してインポートできるファイル形式: Max、 Maya、Blender、Cinema4D、Modo、Lightwave、Cheetah3D、たとえば.MAX、.MB、.MAなど
長所
- 反復作業の時間がかからない(ソースファイルを保存すればUnityが自動で再インポート)
- 最初の作業がシンプル
短所
- Unityプロジェクトを使用しているすべてのマシンに当該ソフトウェアのライセンスをインストールする必要があります
- ファイルは、不要なデータにより肥大化することがありえます
- 容量の大きいファイルはUnityの更新を遅らせます
- 検証が少ないデータ (問題のトラブルシューティングが困難)
サポート対象の3Dアプリケーションのためのガイドラインが次のとおりありますが、ほとんどの場合先に示したファイル種類をエクスポートすることができます。
テクスチャ
Unityでメッシュが使用するテクスチャは次のルールに従って発見されます。最初に、インポータはメッシュと同じフォルダ、あるいはその親フォルダにTexturesという名前のサブフォルダがあるかチェックします。見つけられない場合は、プロジェクト内のすべてのテクスチャを全検索します。若干遅いことのほか、全探索の主な欠点は、同じ名前のテクスチャがプロジェクト内に2つ以上あるかもしれないということです。この場合、正しいものが見つかる保証がありません。

アセットと同じ階層、あるいは親の階層にあるTexturesフォルダにテクスチャを格納する
マテリアルの生成および割り当て
Unityでインポートされた各マテリアルは次のルールに従って処理されます。
マテリアルの生成が無効になっている場合(つまりImport Materialsがオフ)、デフォルトのDiffuseマテリアルを割り当てます。有効になっている場合、次の処理を行います。
- UnityがMaterial Search設定にしたがってUnityマテリアルの名前を選択する
- Unityは、この名前を持つマテリアルを検索します。マテリアルの検索範囲はMaterial Search設定によって定義されます。
Unityは、マテリアルを見つけると、それをインポートしたシーンで使用しますが、見つけられない場合はマテリアルを作成します。
コライダ
Unity は、 Mesh Colliders と Primitive Colliders の 2 つの基本的なコライダを搭載しています。メッシュ コライダは、インポートされたメッシュ データを使用し、環境との衝突に使用できるコンポーネントです。インポート設定でGenerate Colliders を有効にすると、シーンにメッシュを追加した際にメッシュ コライダが自動的に追加されます。物理挙動としては、固体として扱われます。
オブジェクトを動かしている場合 (車など)、メッシュ コライダを使用できません。 代わりに、プリミティブ コライダを使用する必要があります。 この場合、「Generate Colliders」設定を無効にする必要があります。
アニメーション
シーンから自動的にアニメーションがインポートされます。アニメーションのインポート オプションの詳細については、Animation Import の章を参照してください。
法線マッピングとキャラクター
モデルの高いポリゴン バージョンから生成された法線マップのあるキャラクターがある場合、「Smoothing angle」が 180°のゲーム品質バージョンをインポートする必要があります。 これは、接線の分割により、ライティングで変な見た目の継ぎ目が生じるのを防ぎます。 これらの設定でまだ継ぎ目が残っている場合は、「Split tangents across UV seams」を有効にします。
グレースケールの画像を法線マップに変換する場合は、これを気にする必要はありません。
ヒント
- できるだけ多くのメッシュを結合します。メッシュにマテリアルとテクスチャを共有させます。 これには、非常に大きなパフォーマンス上の利点があります。
- Unity で更にオブジェクトを設定する必要がある場合 (物理特性、スクリプトまたはその他)、あとで苦労しないように3D アプリケーションでオブジェクトに適切な名前をつけます。 「pCube17」または「Box42」のようにネーミングしたオブジェクトをたくさん作成すると苦労するだけです。
- 3D アプリケーションで、世界の原点の中心にメッシュを配置させます。これにより、Unity により簡単にメッシュを配置できます。
- メッシュが頂点色を持っていない場合、Unityは最初のレンダリング処理で自動的にメッシュに白い頂点色の配列を追加します。
Unity エディタに表示される頂点や三角形が多すぎる場合(3D アプリと比較して)があります。
これは実際には正しいです。あなたが見ているのは、レンダリングのために実際に OpenGLES に送信される三角形の数です。マテリアルの要件により三角形を 2 回送信する必要がある場合に加え、ハードエッジな法線や非連続UV は、モデリング アプリケーションと比較した場合に、頂点 / 三角形の数を大幅に増やします。細長い線を形成するには、三角形が 3D および UV 空間で連続している必要があるため、UV の継ぎ目がある場合は、劣化した三角形を作成し、細長い線を形成する必要がありますが、これにより三角形の数は増加します。
関連項目
Page last updated: 2012-11-26
class-MovieTexture
注意: Unity Pro/ Advancedのみ
デスクトップ
Movie Texturesは、ビデオファイルから作成され、アニメーション化されたTextureです。
プロジェクトに動画ファイルを配置することで、通常のTexture とまったく同じように使用できるビデオをインポートすることができます。
動画ファイルはApple社のQuickTimeを介してインポートされます。サポートされるファイルの種類は、インストールされたQuickTimeがサポートするものと一致します(通常は .mov、.mpg、.mpeg、.mp4、.avi、.asf)。Windows上でムービーをインポートするには、QuickTimeがインストールされている必要があります(ここ からダウンロード)。
プロパティ
Movie Textures Inspectorは、通常のTexture Inspectorと非常によく似ています。

Unity上でビデオファイルから生成したMovie Textures
| Aniso Level | 急な角度から眺めたときのTexture品質を向上させます。床や地面のTextureに良い。 |
| Filter Mode | Textureが3Dへの変換によりストレッチされたときのフィルタ処理を選択 |
| Loop | オンの時、ムービー再生終了後にループ |
| Quality | Ogg Theoraビデオファイルの圧縮。より高い値により、品質がより高い一方でファイルサイズは大きくなる。 |
詳細
ビデオファイルがプロジェクトに追加されると、自動的にインポートされ、Ogg Theora形式に変換されます。一度Movie Texturesがインポートされると、通常のTextureのように、任意のGameObjectまたはMaterialにアタッチできます。
ムービーを再生
ゲームの実行開始時に、Movie Texturesは自動再生されません。再生を指示するスクリプトを準備する必要があります。
// このコードによりMovie Texturesが再生されます
renderer.material.mainTexture.Play();
スペースが押されたときに動画再生をプレイバックに切り替えるためには、次のスクリプトをアタッチします。
function Update () {
	// スペース(Jumpボタン)が押されたら再生と一時停止を切り替える
	if (Input.GetButtonDown ("Jump")) {
		if (renderer.material.mainTexture.isPlaying) {
			renderer.material.mainTexture.Pause();
		}
		else {
			renderer.material.mainTexture.Play();
		}
	}
}
Movie Texturesを再生する方法の詳細については、Movie Textures スクリプトリファレンス を参照してください。
ムービーオーディオ
Movie Texturesをインポートすると、映像とともにオーディオトラックもインポートされます。オーディオは、Movie Texturesの子アセットのAudioClipとして表示されます。

ビデオのオーディオトラックはProject ViewにてMovie Texturesの子オブジェクトとして表示されます
このオーディオを再生するには、他のオーディオクリップのように、ゲームオブジェクトにアタッチする必要があります。Project Viewから、シーンビューか階層ビューの任意のゲームオブジェクトへドラッグします。
通常、ムービーを見せているのと同じゲームオブジェクトになります。次に audio.Play() を使用し、映像に合わせてオーディオトラックを再生します。

iOS
Movie Texturesは、iOS上ではサポートされません。代わりに、Handheld.PlayFullScreenMovie を使用したフルスクリーン ストリーミング再生が提供されます。
プロジェクト ディレクトリのStreamingAssetsフォルダ内にビデオを格納する必要があります。
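Handheld.PlayFullScreenMovie の呼び出しは、C# では次のような概略になります(ファイル名 "movie.mp4" はStreamingAssetsに置いた仮のものです)。

```csharp
using UnityEngine;

public class MoviePlayer : MonoBehaviour {
    void OnGUI () {
        if (GUI.Button(new Rect(10, 10, 200, 50), "Play Movie")) {
            // StreamingAssets内のファイルをフルスクリーンでストリーミング再生する
            // タップで再生をキャンセルできるモードを指定
            Handheld.PlayFullScreenMovie("movie.mp4", Color.black,
                FullScreenMovieControlMode.CancelOnInput);
        }
    }
}
```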
Unity iOSは、iOSデバイス上で正しく再生できるファイルタイプをサポートします。すなわち、拡張子が .mov、.mp4、.mpv、.3gp で、次の圧縮規格を使用するファイルです:
- H.264 ベースライン プロファイルレベル3.0 ビデオ
- MPEG-4 Part 2 ビデオ
サポートされている圧縮規格の詳細については、iPhone SDKのMPMoviePlayerController クラスリファレンスを参照してください。
iPhoneUtils.PlayMovie あるいは iPhoneUtils.PlayMovieURL をコールすると画面は現在のコンテンツから、指定された背景色にフェードアウトします。ムービーが再生できる状態になるまでに、少し時間がかかるかもしれませんが、その間プレイヤーは背景色が表示され続けるとともにムービーのロード時間の進行状況インジケータを表示することもできます。再生が終了すると、画面は元のコンテンツに戻るためにフェードバックします。
ビデオプレーヤーは、ビデオ再生時のミュート切替は無視します
すでに述べたように、ビデオファイルはAppleの埋め込みプレーヤーを使用して再生されます(SDK 3.2およびiPhone OS 3.1.2およびそれ以前のバージョン)。このプレーヤーにはバグが含まれており、Unityからミュートに切替えることが出来ません。
ビデオプレーヤーは、デバイスの向きを無視します
アップル社ビデオプレーヤーとiPhone SDKはビデオの向きを調整する方法を提供していません。一般的なアプローチは、各ムービーについてランドスケープとポートレートの向きの複製を手動で2つ作成することです。これにより、プレイバック前にデバイスの向きを判定し、正しいムービーを選択して再生することができます。

Android
Movie Texturesは、Android上ではサポートされません。代わりに、Handheld.PlayFullScreenMovie を使用したフルスクリーン ストリーミング再生が提供されます。
プロジェクト ディレクトリのStreamingAssetsフォルダ内にビデオを格納する必要があります。
Unity Androidは、Androidデバイス上で正しく再生できるファイルタイプをサポートします。すなわち、拡張子が .mp4、.3gp で、次の圧縮規格を使用するファイルです:
- H.263
- H.264 AVC
- MPEG-4 SP
ただし、デバイスベンダーがこのリストのサポート範囲を拡大していることが多いため、いくつかのAndroid端末はHD動画など、より多くのフォーマットを再生することができます。
サポートされている圧縮規格の詳細については、Android SDKのコアメディアフォーマットのドキュメントを参照してください。
iPhoneUtils.PlayMovie あるいは iPhoneUtils.PlayMovieURL をコールすると画面は現在のコンテンツから、指定された背景色にフェードアウトします。ムービーが再生できる状態になるまでに、少し時間がかかるかもしれませんが、その間プレイヤーは背景色が表示され続けるとともにムービーのロード時間の進行状況インジケータを表示することもできます。再生が終了すると、画面は元のコンテンツに戻るためにフェードバックします。
class-ProceduralMaterial
手順マテリアル アセットは、ランタイムで生成されるテクスチャです。 詳細については、ユーザー ガイドの Procedural Materials を参照してください。 手順マテリアル アセットは、1 つまたは複数の手順マテリアルを含むことができます。 これらは、通常のマテリアル同様、インスペクタで表示できます。 しかし、手順マテリアルには、多くの場合、微調整可能なパラメータが多く含まれています。 マテリアル アセット同様、インスペクタは、ウィンドウの下部に手順マテリアルのプレビューを表示します。

インスペクタに表示される手順マテリアル
インスペクタのウィンドウは次の 4 つのペインで構成されます。
- サブスタンス アーカイブ マネージャ
- プロパティ
- 生成テクスチャ
- プレビュー
サブスタンス アーカイブ マネージャ
アーカイブ ビューには、手順マテリアル アセットが含むすべての手順マテリアルが表示されます。 プレビューの行から対象の手順マテリアルを選択します。 + および - ボタンを使用して、手順マテリアル アセットに手順マテリアルを追加または削除できます。 手順マテリアルを追加すると、アーカイブで符号化されたプロトタイプを使用して、新しいマテリアルが作成されます。 3 つ目のDuplicateボタンにより、設定をすべて含む、現在選択されている手順マテリアルのコピーである新しい手順マテリアルが作成されます。 マテリアル名フィールドに新しい名前を入力することで、手順マテリアルの名前を変更できます。
プロパティ
マテリアル プロパティ
これらは、どのシェーダを選択したかに基づいた、マテリアルの通常のプロパティです。 通常のマテリアルに対するのと同じ働きをします。
手順プロパティ
手順マテリアルのプロパティは、手順マテリアルがどのように作成されたかによって変わります。
| Generate at Load | シーンを読み込む際にサブスタンスを生成します。 無効にすると、スクリプティングから促された時にのみ生成されます。 |
| Random Seed | 手順マテリアルは、ある程度のランダム性を必要とする場合が多くあります。 Random Seed は、生成された外観を変えるのに使用できます。 これは多くの場合、ゼロになります。 Randomizeボタンをクリックすると、別のシードを取得し、マテリアルの変わる様子を確認できます。 |
生成テクスチャ

生成テクスチャペイン
このエリアでは、手順マテリアルが生成するテクスチャを表示できます。 生成されたテクスチャのそれぞれの下にあるドロップダウンにより、テクスチャに対してアルファ チャンネルを提供するテクスチャ マップを選択できます。 例えば、ベース アルファが Transparency 画像または Specular 画像のいずれかからくるよう指定できます。 下のスクリーンショットは、Specular 画像から来ているベース アルファ チャンネルを示しています。
プラットフォームごとの無効
異なるプラットフォームを作成する場合、対象のプラットフォームに対するテクスチャの解像度やサイズ、画質を考慮する必要があります。 展開しているプラットフォームに応じて、これらのオプションを無効にし、特定の値を割当てることができます。 無効にする値を選択しない場合、プロジェクト作成時にエディタはデフォルトの値を選択します。
| Target Size | 生成されたテクスチャの対象サイズ。 ほとんどの手順テクスチャは解像度に依存しないよう設計されており、選択した対象サイズに従いますが、まれに固定サイズを使用したり、選択可能なサイズを一定の範囲内に制限するものもあります。 生成されたテクスチャの実際のサイズは、インスペクタの下部にあるプレビューで確認できます。 |
| Texture Format | 一旦生成されると、メモリ内のテクスチャに対して使用される内部表示。 サイズと画質間でのトレードオフとなります。 |
| Compressed | 圧縮された RGB テクスチャ。 これにより、浪費されるメモリ量が大幅に減ります。 |
| RAW | 非圧縮のトゥルーカラーで、最高画質になります。 256x256 テクスチャの場合は、256 KB。 |
プレビュー
手順マテリアル プレビューは、マテリアル プレビューと同じ方法で動作します。 しかし、通常のマテリアル プレビューとは違い、生成されたテクスチャのピクセル寸法を表示します。
Page last updated: 2012-11-13
class-RenderTexture
Render Texture は、ランタイムで作成および更新される特殊な Texture です。 使用するには、まず新しいレンダー テクスチャを作成し、Cameras の 1 つを指定して、そこにレンダリングします。 これで、通常のテクスチャのように、Material 内のレンダー テクスチャを使用できます。 Unity 標準アセットの Water プレハブは、リアルタイムの反射と屈折を作成するために、レンダー テクスチャを現実世界で使用する例です。
レンダー テクスチャは、Unity Pro の機能です。
プロパティ
レンダー テクスチャ Inspector は、ほとんどのインスペクタとは異なりますが、Texture Inspector に非常に似ています。

レンダー テクスチャ インスペクタ は、テクスチャ インスペクタとほぼ同じです
レンダー テクスチャ インスペクタは、現在のレンダー テクスチャの内容をリアルタイムで表示するため、レンダー テクスチャを使用するエフェクトのデバッグに役立ちます。
| Size | レンダー テクスチャのサイズ (単位: ピクセル)。 2 のべき乗の値のサイズが選択できることに注目してください。 |
| Aniso Level | 鋭角でテクスチャを表示する際に、テスクチャの質を高めます。 床や地面のテクスチャに適しています。 |
| Filter Mode | 3D 変形で伸長される際に、テクスチャをどのようにフィルタリングするかを選択します。 |
| No Filtering | テクスチャがすぐ近くでむらになります。 |
| Bilinear | テクスチャがすぐ近くでぼやけます。 |
| Trilinear | Bilinear と同じですが、テクスチャも異なるミップ レベル間でぼやけます。 |
| Wrap Mode | テクスチャがタイルを貼った時にどのように動作するかを選択します。 |
| Repeat | テクスチャが自身で (タイルを) 繰り返します。 |
| Clamp | テクスチャの縁が伸長します。 |
例
以下の手順で、ゲーム内で、ライブ アリーナ カメラを非常に素早く作成できます。
- を使用して、レンダー テクスチャを新規作成します。
- を使用して、カメラを新規作成します。
- 新しいカメラの「Target Texture」にレンダー テクスチャを割り当てます。
- 幅と高さがある、厚みのない箱を作成します。
- レンダー テクスチャをそこにドラッグして、レンダー テクスチャを使用するマテリアルを作成します。
- 再生モードに移り、箱のテクスチャが、新しいカメラの出力に基づいて、リアルタイムで更新されるのに注目してください。

レンダー テクスチャは上記のように設定されます
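上記手順のTarget Textureの割り当ては、スクリプトから行うこともできます。以下はC#での概略で、renderTexture変数には作成済みのレンダー テクスチャをインスペクタから割り当てる想定です。

```csharp
using UnityEngine;

public class AssignTargetTexture : MonoBehaviour {
    // 作成済みのレンダー テクスチャをインスペクタで割り当てる(仮)
    public RenderTexture renderTexture;

    void Start () {
        // このGameObjectのカメラの出力先をレンダー テクスチャに変更する
        camera.targetTexture = renderTexture;
    }
}
```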
ヒント
- Unity は、RenderTexture.active に割り当てられたテクスチャ内のすべてをレンダリングします。
class-TextAsset
Text Asset は、インポートされたテキスト ファイルのための形式です。 プロジェクト フォルダにテキストをドロップすると、テキスト アセットに変換されます。 以下のファイル形式がサポートされています。
- .txt
- .html
- .htm
- .xml
- .bytes

テキスト アセット Inspector
プロパティ
| Text | 1 つの文字列としての、アセットの完全なテキスト。 |
詳細
テキスト アセットは、非常に特殊な使用事例です。 ゲームの作成中に、外部のテキスト ファイルからテキストをゲーム内に挿入するのに非常に便利です。 簡単な .txt ファイルを記述し、そのテキストをゲーム内に簡単に挿入できます。 ただし、ランタイムでのテキスト ファイル生成のためのものではありません。 そのような用途には、従来の入出力プログラミング技術を使用して、外部ファイルを読み書きする必要があります。
次のシナリオを考えてください。 あなたは、従来のテキストの多いアドベンチャー ゲームを作成しています。 制作を簡単にするため、ゲームのテキストをすべて別々の部屋に分割したいと思っています。 この場合、1 つの部屋で使用されるすべてのテキストを含む 1 つのテキスト ファイルを作成します。 そこから、入る部屋に対して正しいテキスト アセットを参照するのは簡単です。 さらに、カスタマイズされた構文解析ロジックを使えば、大量のテキストを非常に簡単に管理できます。
バイナリ データ
テキスト アセットの特殊機能として、バイナリ データを格納できるという点があります。 ファイルに拡張子.bytesを与えると、テキスト アセットとしてロードされ、bytes プロパティを通じてアクセスできます。
例えば、jpeg ファイルを Resources フォルダに入れ、拡張子を .bytes に変え、次のスクリプト コードを使用して、ランタイムでデータを読み込みます。
//Load texture from disk
TextAsset bindata= Resources.Load("Texture") as TextAsset;
Texture2D tex = new Texture2D(1,1);
tex.LoadImage(bindata.bytes);
Hints
- Text Assets are serialized like all other assets in a build. There is no physical text file included when you publish your game.
- Text Assets are not intended to be used for text file generation at runtime.
class-Texture2D
Texture 2D
Textures bring your Meshes, Particles, and interfaces to life! They are image or movie files that you lay over or wrap around your objects. As they are so important, they have a lot of properties. If you are reading this for the first time, jump down to Details, and return to the actual settings when you need a reference.
The shaders you use for your objects put specific requirements on which textures you need, but the basic principle is that you can put any image file inside your project. If it meets the size requirements (specified below), it will get imported and optimized for game use. This extends to multi-layer Photoshop or TIFF files - they are flattened on import, so there is no size penalty for your game.
Properties
The Texture Inspector looks a bit different from most others.
The inspector is split into two sections: the Texture Importer above, and the texture preview below.
Texture Importer
Textures all come from image files in your Project Folder. How they are imported is specified by the Texture Importer. You change the import settings by selecting the texture file in the Project View and modifying the Import Settings in the Inspector.
The topmost item in the inspector is the Texture Type menu, which allows you to select the type of texture you want to create from the source image file.
| Texture Type | Select this to set basic parameters depending on the purpose of your texture. |
| Texture | This is the most common setting used for all textures in general. |
| Normal Map | Select this to turn the color channels into a format suitable for real-time normal mapping. For more info, see Normal Maps, below. |
| GUI | Use this if your texture is going to be used on any HUD/GUI Controls. |
| Reflection | Also known as Cube Maps; used to create reflections on textures. Check Cubemap Textures for more info. |
| Cookie | This sets up your texture with the basic parameters used for the Cookies of your lights. |
| Advanced | Select this when you want to set specific parameters on your texture and have total control over it. |

Basic Texture Settings Selected
| Alpha From Grayscale | If enabled, an alpha transparency channel will be generated from the image's existing values of light and dark. |
| Wrap Mode | Selects how the texture behaves when tiled: |
| Repeat | The texture repeats (tiles) itself. |
| Clamp | The texture's edges get stretched. |
| Filter Mode | Selects how the texture is filtered when it gets stretched by 3D transformations: |
| Point | The texture becomes blocky up close. |
| Bilinear | The texture becomes blurry up close. |
| Trilinear | Like Bilinear, but the texture also blurs between the different mip levels. |
| Aniso Level | Increases texture quality when viewing the texture at a steep angle. Good for floor and ground textures. See below. |

Normal Map Settings in the Texture Importer
| Create from Greyscale | If this is enabled, the Bumpiness and Filtering options will be shown. |
| Bumpiness | Controls the amount of bumpiness. |
| Filtering | Determines how the bumpiness is calculated: |
| Smooth | Generates normal maps that are quite smooth. |
| Sharp | Also known as a Sobel filter. Generates normal maps that are sharper than Smooth. |
| Wrap Mode | Selects how the texture behaves when tiled: |
| Repeat | The texture repeats (tiles) itself. |
| Clamp | The texture's edges get stretched. |
| Filter Mode | Selects how the texture is filtered when it gets stretched by 3D transformations: |
| Point | The texture becomes blocky up close. |
| Bilinear | The texture becomes blurry up close. |
| Trilinear | Like Bilinear, but the texture also blurs between the different mip levels. |
| Aniso Level | Increases texture quality when viewing the texture at a steep angle. Good for floor and ground textures. See below. |

GUI Settings for the Texture Importer
| Filter Mode | Selects how the texture is filtered when it gets stretched by 3D transformations: |
| Point | The texture becomes blocky up close. |
| Bilinear | The texture becomes blurry up close. |
| Trilinear | Like Bilinear, but the texture also blurs between the different mip levels. |

Cursor Settings for the Texture Importer
| Wrap Mode | Selects how the texture behaves when tiled: |
| Repeat | The texture repeats (tiles) itself. |
| Clamp | The texture's edges get stretched. |
| Filter Mode | Selects how the texture is filtered when it gets stretched by 3D transformations: |
| Point | The texture becomes blocky up close. |
| Bilinear | The texture becomes blurry up close. |
| Trilinear | Like Bilinear, but the texture also blurs between the different mip levels. |

Reflection Settings in the Texture Importer
| Mapping | This determines how the texture will be mapped to a cubemap. |
| Sphere Mapped | Maps the texture to a "sphere like" cubemap. |
| Cylindrical | Maps the texture to a cylinder. Use this when you want reflections on objects that are like cylinders. |
| Simple Sphere | Maps the texture to a simple sphere, deforming the reflection when you rotate it. |
| Nice Sphere | Maps the texture to a sphere, deforming it when you rotate, but you can still see the texture's wrap. |
| 6 Frames Layout | The texture contains the six faces of the cube arranged in one of the standard cubemap layouts: a cross, or a sequence of images (+x -x +y -y +z -z), in either horizontal or vertical orientation. |
| Fixup edge seams | (Point lights only) Removes visual artifacts at the joined edges of the map image(s), which become visible with glossy reflections. |
| Filter Mode | Selects how the texture is filtered when it gets stretched by 3D transformations: |
| Point | The texture becomes blocky up close. |
| Bilinear | The texture becomes blurry up close. |
| Trilinear | Like Bilinear, but the texture also blurs between the different mip levels. |
| Aniso Level | Increases texture quality when viewing the texture at a steep angle. Good for floor and ground textures. See below. |
An interesting way to add visual detail to your scenes is to use Cookies - greyscale textures used to control the precise look of in-game lighting. This is fantastic for making moving clouds and giving an impression of dense foliage. The Light page has all the details, but the main thing is that for a texture to be usable as a cookie, the Texture Type must be set to Cookie.

Cookie Settings in the Texture Importer
| Light Type | The type of light the texture will be applied to (this can be a Spot, Point, or Directional light). For Directional lights the texture will tile, so in the texture inspector you must set the Edge Mode to Repeat. For Spot lights you should keep the edges of your cookie texture solid black in order to get the proper effect, and set the Edge Mode to Clamp in the texture inspector. |
| Mapping | This determines how the texture will be mapped to a cubemap. |
| Sphere Mapped | Maps the texture to a "sphere like" cubemap. |
| Cylindrical | Maps the texture to a cylinder. Use this when you want reflections on objects that are like cylinders. |
| Simple Sphere | Maps the texture to a simple sphere, deforming the reflection when you rotate it. |
| Nice Sphere | Maps the texture to a sphere, deforming it when you rotate, but you can still see the texture's wrap. |
| 6 Frames Layout | The texture contains the six faces of the cube arranged in one of the standard cubemap layouts: a cross, or a sequence of images (+x -x +y -y +z -z), in either horizontal or vertical orientation. |
| Fixup edge seams | (Point lights only) Removes visual artifacts at the joined edges of the map image(s), which become visible with glossy reflections. |
| Alpha From Greyscale | If enabled, an alpha transparency channel will be generated from the image's existing values of light and dark. |

Lightmap Settings in the Texture Importer
| Filter Mode | Selects how the texture is filtered when it gets stretched by 3D transformations: |
| Point | The texture becomes blocky up close. |
| Bilinear | The texture becomes blurry up close. |
| Trilinear | Like Bilinear, but the texture also blurs between the different mip levels. |
| Aniso Level | Increases texture quality when viewing the texture at a steep angle. Good for floor and ground textures. See below. |

The Advanced Texture Importer Settings dialog
| Non Power of 2 | If the texture has a non-power-of-two size, this defines its scaling behavior at import time (for more info see the Texture Sizes section below): |
| None | The texture will be padded to the next larger power-of-two size, for use with the GUITexture component. |
| To nearest | The texture will be scaled to the nearest power-of-two size at import time. For instance, a 257x511 texture will become 256x512. Note that PVRTC formats require textures to be square (width equal to height), so the final size will be upscaled to 512x512. |
| To larger | The texture will be scaled to the next larger power-of-two size at import time. For instance, a 257x511 texture will become 512x512. |
| To smaller | The texture will be scaled to the next smaller power-of-two size at import time. For instance, a 257x511 texture will become 256x256. |
| Generate Cube Map | Generates a cubemap from the texture using one of several generation methods: |
| Spheremap | Maps the texture to a "sphere like" cubemap. |
| Cylindrical | Maps the texture to a cylinder. Use this when you want reflections on objects that are like cylinders. |
| SimpleSpheremap | Maps the texture to a simple sphere, deforming the reflection when you rotate it. |
| NiceSpheremap | Maps the texture to a sphere, deforming it when you rotate, but you can still see the texture's wrap. |
| FacesVertical | The texture contains the six faces of the cube arranged in a vertical strip, in the order +x -x +y -y +z -z. |
| FacesHorizontal | The texture contains the six faces of the cube arranged in a horizontal strip, in the order +x -x +y -y +z -z. |
| CrossVertical | The texture contains the six faces of the cube arranged in a vertically oriented cross. |
| CrossHorizontal | The texture contains the six faces of the cube arranged in a horizontally oriented cross. |
| Read/Write Enabled | Select this to enable access to the texture data from scripts (GetPixels, SetPixels and other Texture2D functions). Note however that a copy of the texture data will be made, doubling the amount of memory required for the texture asset, so use this only if absolutely necessary. It is only valid for uncompressed and DXT compressed textures; other types of compressed textures cannot be read from. Disabled by default. |
| Import Type | The way the image data is interpreted: |
| Default | Standard texture. |
| Normal Map | The texture is treated as a normal map (enables other options). |
| Lightmap | The texture is treated as a lightmap (disables other options). |
| Alpha from grayscale | (Default mode only) Generates the alpha channel from the luminance of the image. |
| Create from grayscale | (Normal map mode only) Creates the normal map from the luminance of the image. |
| Bypass sRGB sampling | (Default mode only) Uses the image data "as is" without any gamma compensation (useful for GUI textures or textures that encode data other than images). |
| Generate Mip Maps | Select this to enable mip-map generation. Mip maps are smaller versions of the texture that get used when the texture is very small on screen. For more info, see Mip Maps below. |
| In Linear Space | Generates the mipmaps in linear color space. |
| Border Mip Maps | Select this to avoid colors seeping out to the edge of the lower mip levels. Used for light cookies (see below). |
| Mip Map Filtering | Two ways of mip map filtering are available to optimize image quality: |
| Box | The simplest way to fade out the mipmaps - the mip levels become smoother and smoother as they go down in size. |
| Kaiser | A sharpening Kaiser algorithm is run on the mip maps as they go down in size. If your textures are too blurry in the distance, try this option. |
| Fade Out Mips | Enable this to make the mipmaps fade to gray as the mip levels progress. This is used for detail maps. The leftmost scroll is the first mip level to begin fading out. The rightmost scroll defines the mip level where the texture is completely grayed out. |
| Wrap Mode | Selects how the texture behaves when tiled: |
| Repeat | The texture repeats (tiles) itself. |
| Clamp | The texture's edges get stretched. |
| Filter Mode | Selects how the texture is filtered when it gets stretched by 3D transformations: |
| Point | The texture becomes blocky up close. |
| Bilinear | The texture becomes blurry up close. |
| Trilinear | Like Bilinear, but the texture also blurs between the different mip levels. |
| Aniso Level | Increases texture quality when viewing the texture at a steep angle. Good for floor and ground textures. See below. |
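With Read/Write Enabled checked, the GetPixels/SetPixels access mentioned above looks like this; a minimal sketch, with the tint operation chosen purely for illustration:

```csharp
using UnityEngine;

// Sketch: requires Read/Write Enabled in the texture's import settings,
// and an uncompressed or DXT-compressed format.
public class TintTexture : MonoBehaviour
{
    public Texture2D source;

    void Start()
    {
        Color[] pixels = source.GetPixels();           // read the texture data
        for (int i = 0; i < pixels.Length; i++)
            pixels[i] *= new Color(1.0f, 0.5f, 0.5f);  // illustrative red tint
        source.SetPixels(pixels);                      // write it back
        source.Apply();                                // upload the changes to the GPU
    }
}
```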
Per-Platform Overrides
When you are building for different platforms, you have to think about the resolution, size and quality of your textures for each target platform. You can set default options and then override the defaults for a specific platform.

Default settings for all platforms
| Max Texture Size | The maximum imported texture size. Artists often prefer to work with huge textures; scale the texture down to a suitable size with this. |
| Texture Format | What internal representation is used for the texture. This is a tradeoff between size and quality. In the examples below we show the final size of an in-game texture of 256 by 256 pixels: |
| Compressed | Compressed RGB texture. This will be the most common format for diffuse textures. 4 bits per pixel (32 KB for a 256x256 texture). |
| 16 bit | Low-quality truecolor. Has 16 levels of red, green, blue and alpha. |
| Truecolor | Truecolor, this is the highest quality. 256 KB for a 256x256 texture. |
If you have set the Texture Type to Advanced, then the Texture Format has different values.
Desktop
| Texture Format | What internal representation is used for the texture. This is a tradeoff between size and quality. In the examples below we show the final size of an in-game texture of 256 by 256 pixels: |
| RGB Compressed DXT1 | Compressed RGB texture. This is the most common format for diffuse textures. 4 bits per pixel (32 KB for a 256x256 texture). |
| RGBA Compressed DXT5 | Compressed RGBA texture. This is the main format used for diffuse and specular control textures. 1 byte per pixel (64 KB for a 256x256 texture). |
| RGB 16 bit | 65 thousand colors with no alpha. Compressed DXT formats use less memory and usually look better. 128 KB for a 256x256 texture. |
| RGB 24 bit | Truecolor but without alpha. 192 KB for a 256x256 texture. |
| Alpha 8 bit | High quality alpha channel but without any color. 64 KB for a 256x256 texture. |
| RGBA 16 bit | Low-quality truecolor. Has 16 levels of red, green, blue and alpha. Compressed DXT formats use less memory and usually look better. 128 KB for a 256x256 texture. |
| RGBA 32 bit | Truecolor with alpha - this is the highest quality. At 256 KB for a 256x256 texture, this one is expensive. Most of the time, DXT5 offers sufficient quality at a much smaller size. This format is used mainly for normal maps, since DXT compression of normal maps often carries a visible quality loss. |
iOS
| Texture Format | What internal representation is used for the texture. This is a tradeoff between size and quality. In the examples below we show the final size of an in-game texture of 256 by 256 pixels: |
| RGB Compressed PVRTC 4 bits | Compressed RGB texture. This is the most common format for diffuse textures. 4 bits per pixel (32 KB for a 256x256 texture). |
| RGBA Compressed PVRTC 4 bits | Compressed RGBA texture. This is the main format used for diffuse and specular control textures with transparency. 4 bits per pixel (32 KB for a 256x256 texture). |
| RGB Compressed PVRTC 2 bits | Compressed RGB texture. A lower-quality format suitable for diffuse textures. 2 bits per pixel (16 KB for a 256x256 texture). |
| RGBA Compressed PVRTC 2 bits | Compressed RGBA texture. A lower-quality format suitable for diffuse and specular control textures. 2 bits per pixel (16 KB for a 256x256 texture). |
| RGB Compressed DXT1 | Compressed RGB texture. This format is not supported on iOS, but is kept for backwards compatibility with desktop projects. |
| RGBA Compressed DXT5 | Compressed RGBA texture. This format is not supported on iOS, but is kept for backwards compatibility with desktop projects. |
| RGB 16 bit | 65 thousand colors with no alpha. Uses more memory than the PVRTC formats, but can be more suitable for UI or crisp textures without gradients. 128 KB for a 256x256 texture. |
| RGB 24 bit | Truecolor but without alpha. 192 KB for a 256x256 texture. |
| Alpha 8 bit | High quality alpha channel but without any color. 64 KB for a 256x256 texture. |
| RGBA 16 bit | Low-quality truecolor. Has 16 levels of red, green, blue and alpha. Uses more memory than the PVRTC formats, but can be handy if you need an exact alpha channel. 128 KB for a 256x256 texture. |
| RGBA 32 bit | Truecolor with alpha - this is the highest quality. At 256 KB for a 256x256 texture, this one is expensive. Most of the time, PVRTC formats offer sufficient quality at a much smaller size. |
| Compression quality | Choose Fast for fastest performance, Best for best image quality, and Normal for a balance between the two. |

Android
| Texture Format | What internal representation is used for the texture. This is a tradeoff between size and quality. In the examples below we show the final size of an in-game texture of 256 by 256 pixels: |
| RGB Compressed DXT1 | Compressed RGB texture. Supported by Nvidia Tegra. 4 bits per pixel (32 KB for a 256x256 texture). |
| RGBA Compressed DXT5 | Compressed RGBA texture. Supported by Nvidia Tegra. 8 bits per pixel (64 KB for a 256x256 texture). |
| RGB Compressed ETC 4 bits | Compressed RGB texture. This is the default texture format for Android projects. ETC1 is part of OpenGL ES 2.0 and is supported by all OpenGL ES 2.0 GPUs. It does not support alpha. 4 bits per pixel (32 KB for a 256x256 texture). |
| RGB Compressed PVRTC 2 bits | Compressed RGB texture. Supported by Imagination PowerVR GPUs. 2 bits per pixel (16 KB for a 256x256 texture). |
| RGBA Compressed PVRTC 2 bits | Compressed RGBA texture. Supported by Imagination PowerVR GPUs. 2 bits per pixel (16 KB for a 256x256 texture). |
| RGB Compressed PVRTC 4 bits | Compressed RGB texture. Supported by Imagination PowerVR GPUs. 4 bits per pixel (32 KB for a 256x256 texture). |
| RGBA Compressed PVRTC 4 bits | Compressed RGBA texture. Supported by Imagination PowerVR GPUs. 4 bits per pixel (32 KB for a 256x256 texture). |
| RGB Compressed ATC 4 bits | Compressed RGB texture. Supported by Qualcomm Snapdragon. 4 bits per pixel (32 KB for a 256x256 texture). |
| RGBA Compressed ATC 8 bits | Compressed RGBA texture. Supported by Qualcomm Snapdragon. 8 bits per pixel (64 KB for a 256x256 texture). |
| RGB 16 bit | 65 thousand colors with no alpha. Uses more memory than the compressed formats, but can be more suitable for UI or crisp textures without gradients. 128 KB for a 256x256 texture. |
| RGB 24 bit | Truecolor but without alpha. 192 KB for a 256x256 texture. |
| Alpha 8 bit | High quality alpha channel but without any color. 64 KB for a 256x256 texture. |
| RGBA 16 bit | Low-quality truecolor. The default compression for textures with an alpha channel. 128 KB for a 256x256 texture. |
| RGBA 32 bit | Truecolor with alpha - this is the highest quality for textures with alpha. 256 KB for a 256x256 texture. |
| Compression quality | Choose Fast for fastest performance, Best for best image quality, and Normal for a balance between the two. |
Unless you're targeting specific hardware, like Tegra, we recommend using ETC1 compression. If needed, you can store an external alpha channel and still benefit from the lower texture footprint. If you absolutely want to store an alpha channel in a texture, RGBA 16 bit is the format supported by all hardware vendors.
If your application uses an unsupported texture compression, the textures will be uncompressed to RGBA 32 and stored in memory along with the compressed ones. In that case you lose time decompressing the textures and lose memory by storing them twice. It can also have a severe negative impact on rendering performance.
Flash
| Format | Image format |
| RGB JPG Compressed | RGB image data compressed in JPG format |
| RGBA JPG Compressed | RGBA image data (ie, with alpha) compressed in JPG format |
| RGB 24-bit | Uncompressed RGB image data, 8 bits per channel |
| RGBA 32-bit | Uncompressed RGBA image data, 8 bits per channel |
Details
Supported Formats
Unity can read the following image file formats: PSD, TIFF, JPG, TGA, PNG, GIF, BMP, IFF, PICT. Note that Unity can import multi-layer PSD and TIFF files just fine. They are flattened automatically on import, but the layers are maintained in the assets themselves, so you don't lose any of your work when using these file types natively. This is important because it lets you keep just one copy of your textures that you can use from Photoshop, through your 3D modelling app, and into Unity.
Texture Sizes
Ideally, texture sizes should be powers of two on each side. These sizes are: 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024 or 2048 pixels. Textures do not have to be square; the width can be different from the height.
It is possible to use other (non-power-of-two) texture sizes with Unity. Non-power-of-two texture sizes work best when used on GUI Textures; if used on anything else they are converted to an uncompressed RGBA 32 bit format. That means they take up more video memory (compared to PVRTC (iOS) / DXT (desktop) compressed textures) and are slower to load and slower to render (on iOS). In general, use non-power-of-two sizes only for GUI purposes.
Non-power-of-two texture assets can be scaled up at import time using the Non Power of 2 option in the Advanced texture type in the import settings. Unity will scale the texture contents as requested, and in the game they will behave just like any other texture, so they can still be compressed and are very fast to load.
One potential issue with non-power-of-two textures is that Unity internally converts them to power-of-two textures, and this stretching can cause slight image artifacts.
UV Mapping
When mapping a 2D texture onto a 3D model, some sort of wrapping is done. This is called UV mapping and is done in your 3D modelling app. Inside Unity, you can scale and move the texture using Materials. Scaling normal and detail maps is especially useful.
Mip Maps
Mip maps are a list of progressively smaller versions of an image, used to optimise performance in real-time 3D engines. Objects that are far away from the camera use the smaller texture versions. Using mip maps uses 33% more memory, but not using them can cause a huge performance loss. You should always use mip maps for in-game textures; the only exceptions are textures that will never be minified.
Normal Maps
Normal maps are used by normal map shaders to make low-polygon models look as if they contain more detail. Unity uses normal maps encoded as RGB images. You also have the option to generate a normal map from a grayscale height map image.
Detail Maps
If you want to make a terrain, you normally use your main texture to show where there are areas of grass, rocks and sand. If your terrain is of a decent size, it will end up very blurry. Detail textures hide this fact by fading in small details as your main texture gets up close.
When drawing detail textures, a neutral gray is invisible, white makes the main texture twice as bright, and black makes the main texture completely black.
Reflections (Cube Maps)
If you want to use a texture for reflection maps (e.g. with the "Reflective" built-in shaders), you need to use Cubemap Textures.
Anisotropic filtering
Anisotropic filtering increases texture quality when viewed from a grazing angle, at some expense of rendering cost (the cost is entirely on the graphics card). Increasing the anisotropy level is usually a good idea for ground and floor textures. In Quality Settings, anisotropic filtering can be forced for all textures or disabled completely.
No anisotropy (left) / Maximum anisotropy (right) used on the ground texture
comp-AudioGroup
These Components implement sound in Unity:
- Audio Listener - add this to a Camera to get 3D positional sound.
- Audio Source - add this Component to a GameObject to make it play a sound.
class-AudioListener
The Audio Listener acts as a microphone-like device. It receives input from any given Audio Source in the scene and plays sounds through the computer speakers. For most applications it makes the most sense to attach the listener to the Main Camera. If the audio listener is within the boundaries of a Reverb Zone, reverberation is applied to all audible sounds in the scene. (PRO only) Furthermore, Audio Effects can be applied to the listener, and they will be applied to all audible sounds in the scene.

The Audio Listener, attached to the Main Camera
Properties
The Audio Listener has no properties. It simply must be added to work. It is always added to the Main Camera by default.
Details
The Audio Listener works in conjunction with Audio Sources, allowing you to create the aural experience for your games. When the Audio Listener is attached to a GameObject in your scene, any Sources that are close enough to the Listener will be picked up and output to the computer's speakers. Each scene can only have one Audio Listener to work properly.
If the Sources are 3D (see the import settings in Audio Clip), the Listener will emulate the position, velocity and orientation of the sound in the 3D world (you can tweak attenuation and 3D/2D behavior in great detail in the Audio Source). 2D sounds ignore any 3D processing. For example, if your character walks off a street into a night club, the night club's music should probably be 2D, while the individual voices of characters in the club should be mono, with their realistic positioning handled by Unity.
You should attach the Audio Listener to either the Main Camera or to the GameObject that represents the player. Try both to find what suits your game best.
Hints
- Each scene can only have one Audio Listener.
- You can access the project-wide audio settings using the Audio Manager, found in the Edit->Project Settings->Audio menu.
- View the Audio Clip Component page for more information about mono vs. stereo sounds.
class-AudioSource
The Audio Source plays back an Audio Clip in the scene. If the Audio Clip is a 3D clip, the source is played back at a given position and will attenuate over distance. The audio can be spread out between speakers (stereo to 7.1) (Spread) and morphed between 3D and 2D (Pan Level). This can be controlled over distance with falloff curves. Also, if the listener is within one or multiple Reverb Zones, reverberation is applied to the source. (PRO only) Individual filters can be applied to each audio source for an even richer audio experience. See Audio Effects for more details.

An Audio Source gizmo in the Scene View, with its settings in the Inspector.
Properties
| Audio Clip | Reference to the sound clip that will be played. |
| Mute | If enabled, the sound will be playing but muted. |
| Bypass Effects | Quickly "bypass" the filter effects applied to the audio source. An easy way to turn all effects on or off. |
| Play On Awake | If enabled, the sound will start playing the moment the scene launches. If disabled, you need to start it using Play() from scripting. |
| Loop | Enable this to make the Audio Clip loop when it reaches the end. |
| Priority | Determines the priority of this audio source among all the ones that coexist in the scene. (Priority: 0 = most important, 256 = least important, default = 128.) Use 0 for music tracks to avoid them occasionally being swapped out. |
| Volume | How loud the sound is at a distance of one world unit (one meter) from the Audio Listener. |
| Pitch | Amount of change in pitch due to slowdown/speedup of the Audio Clip. Value 1 is normal playback speed. |
| 3D Sound Settings | Settings that are applied to the audio source if it is a 3D sound. |
| Pan Level | Sets how much the 3D engine affects the audio source. |
| Spread | Sets the spread angle of 3D stereo or multichannel sound in speaker space. |
| Doppler Level | Determines how much doppler effect is applied to this audio source (if it is set to 0, no effect is applied). |
| Min Distance | Within the MinDistance, the sound stays at its loudest. Outside the MinDistance it begins to attenuate. Increase the MinDistance of a sound to make it "louder" in a 3D world, and decrease it to make it "quieter". |
| Max Distance | The distance where the sound stops attenuating. Beyond this point it stays at the volume it would have at MaxDistance units from the listener, and does not attenuate any more. |
| Rolloff Mode | How fast the sound fades. The higher the value, the closer the listener has to be before hearing the sound (this is determined by a graph). |
| Logarithmic Rolloff | The sound is loud when you are close to the audio source, but when you move away from the object it decreases significantly fast. |
| Linear Rolloff | The further away from the audio source you go, the less you can hear it. |
| Custom Rolloff | The sound from the audio source behaves according to how you set the rolloff graph. |
| 2D Sound Settings | Settings that are applied to the audio source if it is a 2D sound. |
| Pan 2D | Pans the sound in the stereo field, from left (-1.0) to right (1.0). |
Types of Rolloff
There are three Rolloff modes: Logarithmic, Linear and Custom Rolloff. The Custom Rolloff can be modified by editing the volume-distance curve as described below. If the curve is modified while the mode is set to Logarithmic or Linear, the type automatically changes to Custom Rolloff.

The three Rolloff modes that an Audio Source can have.
Distance Functions
There are several properties of the audio that can be modified as a function of the distance between the audio source and the audio listener:
Volume: Amplitude (0.0 - 1.0) over distance.
Pan: Left (-1.0) to right (1.0) over distance.
Spread: Angle (0.0 - 360.0 degrees) over distance.
Low-Pass (only when a Low Pass Filter is attached to the audio source): Cutoff frequency (22000.0 - 10.0 Hz) over distance.

Distance functions for Volume, Pan, Spread and the Low-Pass audio filter. The current distance to the Audio Listener is marked in the graph.
To modify the distance functions, edit the curves directly. For more information, see Editing Curves.
Creating Audio Sources
An Audio Source doesn't do anything without an assigned Audio Clip. The clip is the actual sound file that will be played back. The source is like a controller for starting and stopping playback of that clip, and for modifying other audio properties.
To create a new Audio Source:
- Import your audio files into your Unity Project. These are now Audio Clips.
- Go to GameObject->Create Empty from the menubar.
- With the new GameObject selected, select Component->Audio->Audio Source.
- Assign the Audio Clip property of the Audio Source Component in the Inspector.
Note: If you want to create an Audio Source just for one Audio Clip in the Assets folder, you can simply drag that clip into the Scene View - a GameObject with an Audio Source component will be created automatically for it. Dragging a clip onto an existing GameObject will attach the clip along with a new Audio Source, if there isn't one already. If the object does already have an Audio Source, the newly dragged clip will replace the clip the source currently uses.
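The steps above can also be performed from a script; a minimal sketch, assuming the clip has been assigned in the Inspector:

```csharp
using UnityEngine;

// Sketch: adds an Audio Source at runtime and starts playback with Play().
public class PlaySound : MonoBehaviour
{
    public AudioClip clip;  // an imported Audio Clip

    void Start()
    {
        AudioSource source = gameObject.AddComponent<AudioSource>();
        source.clip = clip;
        source.loop = true;
        source.Play();
    }
}
```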
Platform-specific details

iOS
On mobile platforms, compressed audio is encoded as MP3 for faster decompression. Be aware that this compression can remove samples at the end of the clip and potentially break a "perfect-looping" clip. Make sure the clip is right on a specific MP3 sample boundary to avoid sample clipping (tools to perform this task are widely available). For performance reasons, audio clips can be played back using the Apple hardware codec. To enable this, check the "Use Hardware" checkbox in the import settings. See Audio Clip for more details.

Android
On mobile platforms, compressed audio is encoded as MP3 for faster decompression. Be aware that this compression can remove samples at the end of the clip and potentially break a "perfect-looping" clip. Make sure the clip is right on a specific MP3 sample boundary to avoid sample clipping (tools to perform this task are widely available).
class-AudioEffect
You can apply filters to AudioSources or to the AudioListener by adding filter components to the same GameObject the AudioSource or AudioListener is attached to. The filter effects are applied in the order of the components, e.g.:

Enabling or disabling a filter component will bypass that filter. Though highly optimized, some filters are still CPU intensive. Audio CPU usage can be monitored in the profiler under the Audio tab.
See these pages for more information on each filter type:
Page last updated: 2012-11-11
class-AudioLowPassFilter
The Audio Low Pass Filter passes the low frequencies of an AudioSource, or all sound reaching an AudioListener, while removing frequencies higher than the Cutoff Frequency.
The Lowpass resonance Q (lowpass resonance quality factor) determines how much the filter's self-resonance is dampened.
A higher Lowpass resonance Q indicates a lower rate of energy loss, i.e. the oscillations die out more slowly.
The Audio Low Pass Filter has a Rolloff curve associated with it, making it possible to set the Cutoff Frequency over the distance between the AudioSource and the AudioListener.

The Audio Low Pass Filter properties in the Inspector.
Properties
| Cutoff Frequency | Lowpass cutoff frequency in Hz. Ranges from 10.0 to 22000.0. Default = 5000.0. |
| Lowpass Resonance Q | Lowpass resonance Q value. Ranges from 1.0 to 10.0. Default = 1.0. |
Adding a low pass filter
To add a low pass filter to a given audio source, select the object in the Inspector and then select Component->Audio->Audio Low Pass Filter.
Hints
- Sounds propagate very differently depending on the environment. For example, to complement a visual fog effect, add a subtle low-pass filter to the Audio Listener.
- The high frequencies of a sound emitted from behind a door will not reach the listener. To simulate this, simply change the Cutoff Frequency when the door opens.
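The door hint above can be sketched as follows, assuming an Audio Low Pass Filter sits on the same GameObject (the cutoff values and the OnDoorOpened entry point are illustrative):

```csharp
using UnityEngine;

// Sketch: muffle a sound behind a closed door, un-muffle when it opens.
public class DoorMuffle : MonoBehaviour
{
    AudioLowPassFilter lowPass;

    void Start()
    {
        lowPass = GetComponent<AudioLowPassFilter>();
        lowPass.cutoffFrequency = 1000.0f;   // muffled while the door is closed
    }

    public void OnDoorOpened()
    {
        lowPass.cutoffFrequency = 22000.0f;  // effectively unfiltered
    }
}
```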
class-AudioHighPassFilter
The Audio High Pass Filter passes the high frequencies of an AudioSource and cuts off signals with frequencies lower than the Cutoff Frequency.
The Highpass resonance Q (highpass resonance quality factor) determines how much the filter's self-resonance is dampened. A higher Highpass resonance Q indicates a lower rate of energy loss, i.e. the oscillations die out more slowly.

The Audio High Pass Filter properties in the Inspector.
Properties
| Cutoff Frequency | Highpass cutoff frequency in Hz. Ranges from 10.0 to 22000.0. Default = 5000.0. |
| Highpass Resonance Q | Highpass resonance Q value. Ranges from 1.0 to 10.0. Default = 1.0. |
Adding a high pass filter
To add a high pass filter to a given audio source, select the object in the Inspector and then select Component->Audio->Audio High Pass Filter.
Page last updated: 2012-11-13
class-AudioEchoFilter
The Audio Echo Filter repeats a sound after a given Delay, attenuating the repetitions based on the Decay Ratio.
The Wet Mix determines the amplitude of the filtered signal, while the Dry Mix determines the amplitude of the unfiltered sound output.

The Audio Echo Filter properties in the Inspector.
Properties
| Delay | Echo delay in ms. Ranges from 10 to 5000. Default = 500. |
| Decay Ratio | Echo decay per delay. Ranges from 0 to 1. 1.0 = no decay, 0.0 = total decay (i.e. a simple one-line delay). Default = 0.5. |
| Wet Mix | Volume of the echo signal to pass to output. Ranges from 0.0 to 1.0. Default = 1.0. |
| Dry Mix | Volume of the original signal to pass to output. Ranges from 0.0 to 1.0. Default = 1.0. |
Adding an echo filter
To add an echo filter to a given audio source, select the object in the Inspector and then select Component->Audio->Audio Echo Filter.
Hints
- Hard surfaces reflect the propagation of sound. For example, a large canyon can be made more convincing with the Audio Echo Filter.
- Sound propagates slower than light - we all know that from lightning and thunder. To simulate this, add an Audio Echo Filter to an event sound, set the Wet Mix to 0.0, and modulate the Delay according to the distance between the AudioSource and the AudioListener.
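The thunder hint above might be sketched like this, assuming one world unit equals one metre (343 m/s is the usual approximation for the speed of sound):

```csharp
using UnityEngine;

// Sketch: delay an echoed event sound by the travel time to the listener.
public class ThunderDelay : MonoBehaviour
{
    public Transform listener;  // typically the Main Camera

    void Start()
    {
        AudioEchoFilter echo = GetComponent<AudioEchoFilter>();
        echo.wetMix = 0.0f;  // as suggested in the hint above

        float distance = Vector3.Distance(transform.position, listener.position);
        echo.delay = distance / 343.0f * 1000.0f;  // delay in milliseconds
    }
}
```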
class-AudioDistortionFilter
The Audio Distortion Filter distorts the sound from an AudioSource or sounds reaching the AudioListener.

The Audio Distortion Filter properties in the Inspector.
Properties
| Distortion | Distortion value. Ranges from 0.0 to 1.0. Default = 0.5. |
Adding a distortion filter
To add an Audio Distortion Filter to a selected AudioSource or AudioListener, select the object in the Inspector and then select Component->Audio->Audio Distortion Filter.
Hints
- Apply the Audio Distortion Filter to simulate the sound of a low-quality radio transmission.
class-AudioReverbFilter
The Audio Reverb Filter takes an Audio Clip and distorts it to create a custom reverb effect.

The Audio Reverb Filter properties in the Inspector.
Properties
| Reverb Preset | Custom reverb presets; select User to create your own customized reverb. |
| Dry Level | Mix level of the dry signal in the output, in mB. Ranges from -10000.0 to 0.0. Default is 0. |
| Room | Room effect level at low frequencies, in mB. Ranges from -10000.0 to 0.0. Default is 0.0. |
| Room HF | Room effect high-frequency level relative to the low-frequency level, in mB. Ranges from -10000.0 to 0.0. Default is 0.0. |
| Room LF | Room effect low-frequency level, in mB. Ranges from -10000.0 to 0.0. Default is 0.0. |
| Decay Time | Reverberation decay time at low frequencies, in seconds. Ranges from 0.1 to 20.0. Default is 1.0. |
| Decay HFRatio | High-frequency to low-frequency decay time ratio. Ranges from 0.1 to 2.0. Default is 0.5. |
| Reflections Level | Early reflections level relative to the room effect, in mB. Ranges from -10000.0 to 1000.0. Default is -10000.0. |
| Reflections Delay | Early reflections delay time relative to the room effect, in seconds. Ranges from 0.0 to 0.3. Default is 0.0. |
| Reverb Level | Late reverberation level relative to the room effect, in mB. Ranges from -10000.0 to 2000.0. Default is 0.0. |
| Reverb Delay | Late reverberation delay time relative to the initial reflection, in seconds. Ranges from 0.0 to 0.1. Default is 0.04. |
| HFReference | Reference high frequency, in Hz. Ranges from 20.0 to 20000.0. Default is 5000.0. |
| LFReference | Reference low frequency, in Hz. Ranges from 20.0 to 1000.0. Default is 250.0. |
| Diffusion | Reverberation diffusion (echo density), in percent. Ranges from 0.0 to 100.0. Default is 100.0. |
| Density | Reverberation density (modal density), in percent. Ranges from 0.0 to 100.0. Default is 100.0. |
Note: These values can only be modified if the Reverb Preset is set to User; otherwise they are grayed out and hold the default values for the selected preset.
Adding a reverb filter
To add a reverb filter to a given audio source, select the object in the Inspector and then select Component->Audio->Audio Reverb Filter.
Page last updated: 2012-11-11
class-AudioChorusFilter
The Audio Chorus Filter takes an Audio Clip and processes it to create a chorus effect.
The chorus effect modulates the original sound with a sinusoidal low-frequency oscillator (LFO). The output sounds like there are multiple sources emitting the same sound with slight variations - resembling a choir.

The Audio Chorus Filter properties in the Inspector.
Properties
| Dry Mix | Volume of the original signal to pass to output. Ranges from 0.0 to 1.0. Default = 0.5. |
| Wet Mix 1 | Volume of the 1st chorus tap. Ranges from 0.0 to 1.0. Default = 0.5. |
| Wet Mix 2 | Volume of the 2nd chorus tap. This tap is 90 degrees out of phase with the first tap. Ranges from 0.0 to 1.0. Default = 0.5. |
| Wet Mix 3 | Volume of the 3rd chorus tap. This tap is 90 degrees out of phase with the second tap. Ranges from 0.0 to 1.0. Default = 0.5. |
| Delay | The LFO's delay in ms. Ranges from 0.1 to 100.0. Default = 40.0 ms. |
| Rate | The LFO's modulation rate in Hz. Ranges from 0.0 to 20.0. Default = 0.8 Hz. |
| Depth | Chorus modulation depth. Ranges from 0.0 to 1.0. Default = 0.03. |
| Feed Back | Chorus feedback. Controls how much of the wet signal gets fed back into the filter's buffer. Ranges from 0.0 to 1.0. Default = 0.0. |
Adding a chorus filter
To add a chorus filter to a given audio source, select the object in the Inspector and then select Component->Audio->Audio Chorus Filter.
Hints
- Since a flanger is a variant of the chorus, you can tweak the chorus filter into a flanger effect by lowering the feedback and decreasing the delay.
- You can create a simple dry echo by setting Rate and Depth to 0 and tweaking the mixes and the Delay.
class-AudioReverbZone
Reverb Zones take an Audio Clip and distort it depending on where the audio listener is located inside the reverb zone. They are used when you want to gradually change from a point where there is no ambient effect to a place where there is one, for example when entering a cavern.

The Audio Reverb Zone gizmo and its properties in the Inspector.
Properties
| Min Distance | Represents the radius of the inner circle in the gizmo. This determines the zone where the reverb gradually fades in and where the reverb is fully applied. |
| Max Distance | Represents the radius of the outer circle in the gizmo. This determines the zone where there is no effect and where the reverb starts to be applied gradually. |
| Reverb Preset | Determines the reverb effect used by the reverb zone. |
Click on the image below for a better understanding of the reverb zone's properties.

How sound works in a reverb zone
Adding a Reverb Zone
To add a Reverb Zone to a given audio source, select the object in the Inspector and then select Component->Audio->Audio Reverb Zone.
Hints
- You can mix reverb zones to create combined effects.
class-Microphone
The Microphone class is useful for capturing input from a built-in (physical) microphone on your PC or mobile device.
With this class you can start and end a recording from a built-in microphone, get a list of the available audio input devices, and find out the status of each such input device.
See the Microphone page in the Scripting Reference for more information on using this class.
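A minimal recording sketch (the clip length and sample rate are arbitrary choices for illustration):

```csharp
using UnityEngine;

// Sketch: records five seconds from the default microphone and plays it back.
public class MicCapture : MonoBehaviour
{
    void Start()
    {
        // List the available input devices
        foreach (string device in Microphone.devices)
            Debug.Log(device);

        // null selects the default device; 5 seconds, 44100 Hz, no looping
        AudioClip clip = Microphone.Start(null, false, 5, 44100);

        AudioSource source = gameObject.AddComponent<AudioSource>();
        source.clip = clip;
        source.Play();  // playback starts while recording is still filling the clip
    }
}
```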

comp-DynamicsGroup
Unity has the NVIDIA PhysX physics engine built in. This allows for unique emergent behaviour and has many useful features.
Basics
To put an object under physics control, simply add a Rigidbody to it. When you do this, the object will be affected by gravity, and can collide with other objects in the world.
Rigidbodies
Rigidbodies are physically simulated objects. You use Rigidbodies for things that the player can push around, for example crates or loose objects, or you can move Rigidbodies around directly by adding forces to it by scripting.
If you move the Transform of a non-Kinematic Rigidbody directly it may not collide correctly with other objects. Instead you should move a Rigidbody by applying forces and torque to it. You can also add Joints to rigidbodies to make the behavior more complex. For example, you could make a physical door or a crane with a swinging chain.
You also use Rigidbodies to bring vehicles to life, for example you can make cars using a Rigidbody, 4 Wheel Colliders and a script applying wheel forces based on the user's Input.
You can make airplanes by applying forces to the Rigidbody from a script. Or you can create special vehicles or robots by adding various Joints and applying forces via scripting.
Rigidbodies are most often used in combination with primitive colliders.
Tips:
- You should never have a parent and child rigidbody together
- You should never scale the parent of a rigidbody
Kinematic Rigidbodies
A Kinematic Rigidbody is a Rigidbody that has the isKinematic option enabled. Kinematic Rigidbodies are not affected by forces, gravity or collisions. They are driven explicitly by setting the position and rotation of the Transform or animating them, yet they can interact with other non-Kinematic Rigidbodies.
Kinematic Rigidbodies correctly wake up other Rigidbodies when they collide with them, and they apply friction to Rigidbodies placed on top of them.
These are a few example uses for Kinematic Rigidbodies:
- Sometimes you want an object to be under physics control but in another situation to be controlled explicitly from a script or animation. For example you could make an animated character whose bones have Rigidbodies attached that are connected with joints for use as a Ragdoll. Most of the time the character is under animation control, thus you make the Rigidbody Kinematic. But when he gets hit you want him to turn into a Ragdoll and be affected by physics. To accomplish this, you simply disable the isKinematic property.
- Sometimes you want a moving object that can push other objects yet not be pushed itself. For example if you have an animated platform and you want to place some Rigidbody boxes on top, you should make the platform a Kinematic Rigidbody instead of just a Collider without a Rigidbody.
- You might want to have a Kinematic Rigidbody that is animated and have a real Rigidbody follow it using one of the available Joints.
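The ragdoll example above boils down to toggling isKinematic; a minimal sketch (the OnHit entry point is hypothetical):

```csharp
using UnityEngine;

// Sketch: a ragdoll bone that follows animation until the character is hit.
public class RagdollBone : MonoBehaviour
{
    public void OnHit()
    {
        // While isKinematic is true the bone is driven by the animation;
        // disabling it hands the bone over to the physics engine.
        GetComponent<Rigidbody>().isKinematic = false;
    }
}
```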
Static Colliders
A Static Collider is a GameObject that has a Collider but not a Rigidbody. Static Colliders are used for level geometry which always stays at the same place and never moves around. You can add a Mesh Collider to your already existing graphical meshes (even better use the Generate Colliders check box), or you can use one of the other Collider types.
You should never move a Static Collider on a frame by frame basis. Moving Static Colliders will cause an internal recomputation in PhysX that is quite expensive and which will result in a big drop in performance. On top of that the behaviour of waking up other Rigidbodies based on a Static Collider is undefined, and moving Static Colliders will not apply friction to Rigidbodies that touch it. Instead, Colliders that move should always be Kinematic Rigidbodies.
Character Controllers
You use Character Controllers if you want to make a humanoid character. This could be the main character in a third person platformer, FPS shooter or any enemy characters.
These Controllers don't follow the rules of physics since it will not feel right (in Doom you run 90 miles per hour, come to halt in one frame and turn on a dime). Instead, a Character Controller performs collision detection to make sure your characters can slide along walls, walk up and down stairs, etc.
Character Controllers are not affected by forces but they can push Rigidbodies by applying forces to them from a script. Usually, all humanoid characters are implemented using Character Controllers.
Character Controllers are inherently unphysical, thus if you want to apply real physics - Swing on ropes, get pushed by big rocks - to your character you have to use a Rigidbody, this will let you use joints and forces on your character. Character Controllers are always aligned along the Y axis, so you also need to use a Rigidbody if your character needs to be able to change orientation in space (for example under a changing gravity). However, be aware that tuning a Rigidbody to feel right for a character is hard due to the unphysical way in which game characters are expected to behave. Another difference is that Character Controllers can slide smoothly over steps of a specified height, while Rigidbodies will not.
If you parent a Character Controller with a Rigidbody you will get a "Joint" like behavior.
Component Details
Physics Control
- Rigidbody - Rigidbodies put objects under physics control.
- Constant Force - A utility component for quickly adding a constant force to a Rigidbody. Great for rockets and other quick functionality.
Colliders
- Sphere Collider - used for sphere-shaped objects.
- Box Collider - used for box-shaped objects.
- Capsule Collider - used for capsule-like objects (a cylinder with hemispheric ends).
- Mesh Collider - takes the graphical mesh and uses it as a collision shape.
- Physic Material - contains settings that allow you to fine-tune your object's physical properties (friction, bounce, etc).
Joints
- Hinge Joint - used to make door hinges.
- Spring Joint - a spring-like joint.
- Fixed Joint - used to "lock" objects together.
- Configurable Joint - used to create complicated joint behaviors of virtually any kind.
Special Function
- Character Controller and Character Joint - used to make character controllers.
- Wheel Collider - a special collider for grounded vehicles.
- Skinned Cloth - used to create skinned cloth.
- Interactive Cloth - used to create interactive cloth; this is just normal cloth being simulated.
class-BoxCollider
The Box Collider is a basic cube-shaped collision primitive.

A pile of Box Colliders
Properties
| Material | Reference to the Physic Material to use. The physics material determines how this Collider behaves when it collides with others. |
| Is Trigger | If enabled, this Collider is used for triggering events, and is ignored by the physics engine. |
| Size | The size of the Collider in the X, Y, Z directions. |
| Center | The position of the Collider in the object's local space. |
Details
The Box Collider can be resized into different shapes of rectangular prisms. It works great for doors, walls, floors, etc. It is also effective as a human torso in a ragdoll or as a car hull in a vehicle. And of course, it works perfectly for plain boxes and crates as well.

A standard Box Collider
Colliders work with Rigidbodies to bring physics in Unity to life. Whereas Rigidbodies allow objects to be controlled by physics, Colliders allow objects to collide with each other. Colliders must be added to objects independently of Rigidbodies. A Collider does not necessarily need a Rigidbody attached, but a Rigidbody must be attached in order for the object to move as a result of collisions.
When a collision occurs between two Colliders and at least one of them has a Rigidbody attached, three collision messages are sent out to the objects attached to them. These events can be handled in scripting, and allow you to create unique behaviors with or without making use of the built-in NVIDIA PhysX engine.
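As a minimal sketch, the three collision messages can be handled in a script attached to one of the colliding objects. The class name and log text here are illustrative, not part of the manual:

```csharp
using UnityEngine;

// Logs the three collision messages. Attach to a GameObject with a
// Collider; at least one of the two colliding objects must also have
// a Rigidbody for these messages to be sent.
public class CollisionLogger : MonoBehaviour
{
    void OnCollisionEnter(Collision collision)
    {
        Debug.Log("Started touching " + collision.gameObject.name);
    }

    void OnCollisionStay(Collision collision)
    {
        Debug.Log("Still touching " + collision.gameObject.name);
    }

    void OnCollisionExit(Collision collision)
    {
        Debug.Log("Stopped touching " + collision.gameObject.name);
    }
}
```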
Triggers
An alternative way of using Colliders is to mark them as a Trigger: just enable the IsTrigger property checkbox in the Inspector. Triggers are effectively ignored by the physics engine, and have a unique set of three trigger messages that are sent out when a collision with a Trigger occurs. Triggers are useful for triggering other events in your game, like cutscenes, automatic door opening, displaying tutorial messages, and so on. Use your imagination!
Be aware that in order for two Triggers to send out trigger events when they collide, one of them must include a Rigidbody as well. For a Trigger to collide with a normal Collider, one of them must have a Rigidbody attached. For a detailed chart of the different types of collisions, see the collision action matrix in the Advanced section below.
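The corresponding trigger messages look like this in a script. This is a rough sketch; the class name is invented for illustration:

```csharp
using UnityEngine;

// Demonstrates the three trigger messages. Attach to a GameObject whose
// Collider has IsTrigger enabled; the other object (or this one) needs
// a Rigidbody for the events to fire.
public class TriggerLogger : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        Debug.Log(other.name + " entered the trigger");
    }

    void OnTriggerStay(Collider other)
    {
        // Called every physics step while "other" remains inside.
    }

    void OnTriggerExit(Collider other)
    {
        Debug.Log(other.name + " left the trigger");
    }
}
```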
Friction and bounciness
Friction, bounciness and softness are defined in the Physic Material. The Standard Assets contain the most common physics materials. To use one of them, click on the Physic Material drop-down and select one, e.g. Ice. You can also create your own physics materials and tweak all friction values.
Compound Colliders
Compound Colliders are combinations of primitive Colliders, collectively acting as a single Collider. They come in handy when you have a complex mesh to use in collisions but cannot use a Mesh Collider. To create a Compound Collider, create child objects of your colliding object, then add a primitive Collider to each child object. This allows you to position, rotate, and scale each Collider easily and independently of one another.

A real-world Compound Collider setup
In the above picture, the Gun Model GameObject has a Rigidbody attached, and multiple primitive Colliders as child GameObjects. When the Rigidbody parent is moved around by forces, the child Colliders move along with it. The primitive Colliders will collide with the environment's Mesh Collider, and the parent Rigidbody will alter the way it moves based on the forces applied to it and how its child Colliders interact with other Colliders in the Scene.
Mesh Colliders can't normally collide with each other. If a Mesh Collider is marked as Convex, then it can collide with other Mesh Colliders. The typical solution is to use primitive Colliders for any objects that move, and Mesh Colliders for static background objects.
Hints
- To add multiple Colliders to an object, create child GameObjects and attach a Collider to each one. This allows each Collider to be manipulated independently.
- You can look at the gizmos in the Scene View to see how the Collider is being calculated on your object.
- Colliders do their best to match the scale of an object. If you have a non-uniform scale (a scale which is different in each direction), only the Mesh Collider can match completely.
- If you are moving an object through its Transform component but still want to receive Collision/Trigger messages, you must attach a Rigidbody to the object that is moving.
Advanced
Collider combinations
There are numerous different combinations of collisions that can happen in Unity. Each game is unique, and different combinations may work better for different types of games. If you're using physics in your game, it will be very helpful to understand the basic Collider types, their common uses, and how they interact with other types of objects.
Static Collider
These are GameObjects that do not have a Rigidbody attached, but do have a Collider attached. These objects should remain still, or move very little. They work great for your environment geometry. They will not move if a Rigidbody collides with them.
Rigidbody Collider
These GameObjects contain both a Rigidbody and a Collider. They are completely affected by the physics engine, and their trajectory changes through applied forces and collisions. They can collide with GameObjects that only contain a Collider. These will likely be your primary type of Collider in games that use physics.
Kinematic Rigidbody Collider
This GameObject contains a Collider and a Rigidbody which is marked IsKinematic. To move this GameObject, you modify its Transform component rather than applying forces. They're similar to Static Colliders but work better when you want to move the Collider around frequently. There are some other specialized scenarios for using this GameObject.
This object can be used to get around the restriction that Static Colliders cannot send trigger events. Since Triggers must have a Rigidbody attached, you should attach a Rigidbody and then enable IsKinematic. This will prevent your object from moving under physics influence, and allow you to receive trigger events when you want to.
Kinematic Rigidbodies can easily be turned on and off. This is great for creating ragdolls: you normally want a character to follow an animation, then turn into a ragdoll when a collision occurs, prompted by an explosion or some other effect.
If Rigidbodies come to rest and are not moving for some time, they will fall asleep. That is, they will not be updated during the physics loop, and their position stays as it is. If you move a Kinematic Rigidbody Collider out from underneath normal Rigidbody Colliders that are resting on top of it, the sleeping Rigidbodies will wake up and be correctly updated again by the physics loop. So if you have a lot of Static Colliders that you want to move around and have different objects fall on them correctly, use Kinematic Rigidbody Colliders.
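The ragdoll switch described above can be sketched as follows. This is an illustrative example, assuming the character's limbs each carry a Rigidbody that starts out kinematic:

```csharp
using UnityEngine;

// Illustrative sketch: while the Rigidbodies are kinematic, the character
// follows its animation; calling ActivateRagdoll hands all child bodies
// over to the physics engine.
public class RagdollSwitch : MonoBehaviour
{
    public void ActivateRagdoll()
    {
        foreach (Rigidbody body in GetComponentsInChildren<Rigidbody>())
        {
            // Turning isKinematic off lets forces and collisions move the limb.
            body.isKinematic = false;
        }
    }
}
```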
Collision action matrix
Depending on the configuration of the two colliding objects, a number of different actions can occur. The chart below outlines what you can expect from two colliding objects, based on the components attached to them. Some of the combinations only cause one of the two objects to be affected by the collision, so keep the standard rule in mind: physics will not be applied to objects that do not have a Rigidbody attached.
| Collision detection occurs and collision messages are sent upon collision | ||||||
| Static Collider | Rigidbody Collider | Kinematic Rigidbody Collider | Static Trigger Collider | Rigidbody Trigger Collider | Kinematic Rigidbody Trigger Collider | |
| Static Collider | Y | |||||
| Rigidbody Collider | Y | Y | Y | |||
| Kinematic Rigidbody Collider | Y | |||||
| Static Trigger Collider | ||||||
| Rigidbody Trigger Collider | ||||||
| Kinematic Rigidbody Trigger Collider | ||||||
| Trigger messages are sent upon collision | ||||||
| Static Collider | Rigidbody Collider | Kinematic Rigidbody Collider | Static Trigger Collider | Rigidbody Trigger Collider | Kinematic Rigidbody Trigger Collider | |
| Static Collider | Y | Y | ||||
| Rigidbody Collider | Y | Y | Y | |||
| Kinematic Rigidbody Collider | Y | Y | Y | |||
| Static Trigger Collider | Y | Y | Y | Y | ||
| Rigidbody Trigger Collider | Y | Y | Y | Y | Y | Y |
| Kinematic Rigidbody Trigger Collider | Y | Y | Y | Y | Y | Y |
Layer-Based Collision Detection
Unity 3.x added Layer-Based Collision Detection, which lets you configure, for any combination of layers, whether objects on those layers collide with each other. For more information, click here.
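Besides configuring this in the Physics settings, layer pairs can also be toggled from a script. A minimal sketch, where the layer numbers 8 and 9 are placeholders for your own layers:

```csharp
using UnityEngine;

// Sketch: stop objects on layer 8 from colliding with objects on layer 9.
public class LayerCollisionSetup : MonoBehaviour
{
    void Start()
    {
        Physics.IgnoreLayerCollision(8, 9, true);
    }
}
```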
Page last updated: 2012-11-09
class-CapsuleCollider
The Capsule Collider is made of two half-spheres joined together by a cylinder. It is the same shape as the Capsule primitive.

A pile of Capsule Colliders
Properties
| Material | Reference to the Physic Material to use. The physics material determines how this Collider behaves when it collides with others. |
| Is Trigger | If enabled, this Collider is used for triggering events, and is ignored by the physics engine. |
| Radius | The radius of the Collider in local space. |
| Height | The total height of the Collider. |
| Direction | The axis of the capsule's lengthwise orientation in the object's local space. |
| Center | The position of the Collider in the object's local space. |
Details
You can adjust the Capsule Collider's Radius and Height independently of each other. Its main use is in the Character Controller, but it also works well for poles, or can be combined with other Colliders for unusual shapes.

A standard Capsule Collider
Colliders work with Rigidbodies to bring physics in Unity to life. Whereas Rigidbodies allow objects to be controlled by physics, Colliders allow objects to collide with each other. Colliders must be added to objects independently of Rigidbodies. A Collider does not necessarily need a Rigidbody attached, but a Rigidbody must be attached in order for the object to move as a result of collisions.
When a collision occurs between two Colliders and at least one of them has a Rigidbody attached, three collision messages are sent out to the objects attached to them. These events can be handled in scripting, and allow you to create unique behaviors with or without making use of the built-in NVIDIA PhysX engine.
Triggers
An alternative way of using Colliders is to mark them as a Trigger: just enable the IsTrigger property checkbox in the Inspector. Triggers are effectively ignored by the physics engine, and have a unique set of three trigger messages that are sent out when a collision with a Trigger occurs. Triggers are useful for triggering other events in your game, like cutscenes, automatic door opening, displaying tutorial messages, and so on. Use your imagination!
Be aware that in order for two Triggers to send out trigger events when they collide, one of them must include a Rigidbody as well. For a Trigger to collide with a normal Collider, one of them must have a Rigidbody attached. For a detailed chart of the different types of collisions, see the collision action matrix in the Advanced section below.
Friction and bounciness
Friction, bounciness and softness are defined in the Physic Material. The Standard Assets contain the most common physics materials. To use one of them, click on the Physic Material drop-down and select one, e.g. Ice. You can also create your own physics materials and tweak all friction values.
Compound Colliders
Compound Colliders are combinations of primitive Colliders, collectively acting as a single Collider. They come in handy when you have a complex mesh to use in collisions but cannot use a Mesh Collider. To create a Compound Collider, create child objects of your colliding object, then add a primitive Collider to each child object. This allows you to position, rotate, and scale each Collider easily and independently of one another.

A real-world Compound Collider setup
In the above picture, the Gun Model GameObject has a Rigidbody attached, and multiple primitive Colliders as child GameObjects. When the Rigidbody parent is moved around by forces, the child Colliders move along with it. The primitive Colliders will collide with the environment's Mesh Collider, and the parent Rigidbody will alter the way it moves based on the forces applied to it and how its child Colliders interact with other Colliders in the Scene.
Mesh Colliders can't normally collide with each other. If a Mesh Collider is marked as Convex, then it can collide with other Mesh Colliders. The typical solution is to use primitive Colliders for any objects that move, and Mesh Colliders for static background objects.
Hints
- To add multiple Colliders to an object, create child GameObjects and attach a Collider to each one. This allows each Collider to be manipulated independently.
- You can look at the gizmos in the Scene View to see how the Collider is being calculated on your object.
- Colliders do their best to match the scale of an object. If you have a non-uniform scale (a scale which is different in each direction), only the Mesh Collider can match completely.
- If you are moving an object through its Transform component but still want to receive Collision/Trigger messages, you must attach a Rigidbody to the object that is moving.
Advanced
Collider combinations
There are numerous different combinations of collisions that can happen in Unity. Each game is unique, and different combinations may work better for different types of games. If you're using physics in your game, it will be very helpful to understand the basic Collider types, their common uses, and how they interact with other types of objects.
Static Collider
These are GameObjects that do not have a Rigidbody attached, but do have a Collider attached. These objects should remain still, or move very little. They work great for your environment geometry. They will not move if a Rigidbody collides with them.
Rigidbody Collider
These GameObjects contain both a Rigidbody and a Collider. They are completely affected by the physics engine, and their trajectory changes through applied forces and collisions. They can collide with GameObjects that only contain a Collider. These will likely be your primary type of Collider in games that use physics.
Kinematic Rigidbody Collider
This GameObject contains a Collider and a Rigidbody which is marked IsKinematic. To move this GameObject, you modify its Transform component rather than applying forces. They're similar to Static Colliders but work better when you want to move the Collider around frequently. There are some other specialized scenarios for using this GameObject.
This object can be used to get around the restriction that Static Colliders cannot send trigger events. Since Triggers must have a Rigidbody attached, you should attach a Rigidbody and then enable IsKinematic. This will prevent your object from moving under physics influence, and allow you to receive trigger events when you want to.
Kinematic Rigidbodies can easily be turned on and off. This is great for creating ragdolls: you normally want a character to follow an animation, then turn into a ragdoll when a collision occurs, prompted by an explosion or some other effect.
If Rigidbodies come to rest and are not moving for some time, they will fall asleep. That is, they will not be updated during the physics loop, and their position stays as it is. If you move a Kinematic Rigidbody Collider out from underneath normal Rigidbody Colliders that are resting on top of it, the sleeping Rigidbodies will wake up and be correctly updated again by the physics loop. So if you have a lot of Static Colliders that you want to move around and have different objects fall on them correctly, use Kinematic Rigidbody Colliders.
Collision action matrix
Depending on the configuration of the two colliding objects, a number of different actions can occur. The chart below outlines what you can expect from two colliding objects, based on the components attached to them. Some of the combinations only cause one of the two objects to be affected by the collision, so keep the standard rule in mind: physics will not be applied to objects that do not have a Rigidbody attached.
| Collision detection occurs and collision messages are sent upon collision | ||||||
| Static Collider | Rigidbody Collider | Kinematic Rigidbody Collider | Static Trigger Collider | Rigidbody Trigger Collider | Kinematic Rigidbody Trigger Collider | |
| Static Collider | Y | |||||
| Rigidbody Collider | Y | Y | Y | |||
| Kinematic Rigidbody Collider | Y | |||||
| Static Trigger Collider | ||||||
| Rigidbody Trigger Collider | ||||||
| Kinematic Rigidbody Trigger Collider | ||||||
| Trigger messages are sent upon collision | ||||||
| Static Collider | Rigidbody Collider | Kinematic Rigidbody Collider | Static Trigger Collider | Rigidbody Trigger Collider | Kinematic Rigidbody Trigger Collider | |
| Static Collider | Y | Y | ||||
| Rigidbody Collider | Y | Y | Y | |||
| Kinematic Rigidbody Collider | Y | Y | Y | |||
| Static Trigger Collider | Y | Y | Y | Y | ||
| Rigidbody Trigger Collider | Y | Y | Y | Y | Y | Y |
| Kinematic Rigidbody Trigger Collider | Y | Y | Y | Y | Y | Y |
Layer-Based Collision Detection
Unity 3.x added Layer-Based Collision Detection, which lets you configure, for any combination of layers, whether objects on those layers collide with each other. For more information, click here.
Page last updated: 2012-11-26
class-CharacterController
The Character Controller is mainly used for third-person or first-person player control that does not make use of Rigidbody physics.

The Character Controller
Properties
| Height | The height of the Character's Capsule Collider. Changing this will scale the collider along the Y axis in both positive and negative directions. |
| Radius | The radius of the Capsule Collider in local space. This is essentially the width of the collider. |
| Slope Limit | Limits the collider to only climb slopes that are equal to or less steep (in degrees) than the indicated value. |
| Step Offset | The character will step up a stair only if it is closer to the ground than the indicated value. |
| Min Move Distance | If the character tries to move less than the indicated value, it will not move at all. This can be used to reduce jitter. In most situations this value should be left at 0. |
| Skin Width | Two colliders can penetrate each other as deep as their Skin Width. Larger Skin Widths reduce jitter. A low Skin Width can cause the character to get stuck. A good setting is to make this value 10% of the Radius. |
| Center | This will offset the Capsule Collider in world space, and won't affect how the Character pivots. |
Details
The traditional Doom-style first-person controls are not physically realistic. The character runs 90 miles per hour, comes to a halt immediately, and turns on a dime. Because it is so unrealistic, using Rigidbodies and physics to create this behavior is impractical and will feel wrong. The solution is the specialized Character Controller. It is simply a capsule-shaped Collider which can be told to move in some direction from a script. The Controller will then carry out the movement, but be constrained by collisions. It will slide along walls, walk up stairs (if they are lower than the Step Offset) and walk on slopes within the Slope Limit.
The Controller does not react to forces on its own, and it does not automatically push Rigidbodies away.
If you want the Character Controller to push Rigidbodies or other objects, you can apply forces to any object it collides with via the OnControllerColliderHit() function through scripting.
On the other hand, if you want your player character to be affected by physics, then you might be better off using a Rigidbody instead of the Character Controller.
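A rough sketch of pushing Rigidbodies from OnControllerColliderHit(), as described above. The class name and the pushPower tuning value are illustrative:

```csharp
using UnityEngine;

// Attach next to a CharacterController: pushes any non-kinematic
// Rigidbody the controller runs into.
public class PushRigidbodies : MonoBehaviour
{
    public float pushPower = 2.0f;

    void OnControllerColliderHit(ControllerColliderHit hit)
    {
        Rigidbody body = hit.collider.attachedRigidbody;
        if (body == null || body.isKinematic)
            return;

        // Push sideways only, never up or down.
        Vector3 pushDir = new Vector3(hit.moveDirection.x, 0, hit.moveDirection.z);
        body.velocity = pushDir * pushPower;
    }
}
```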
Fine-tuning your character
You can modify the Height and Radius to fit your Character's mesh. It is recommended to always use around 2 meters for a human-like character. You can also modify the Center of the capsule in case your pivot point is not at the exact center of the Character.
Step Offset can affect this too; make sure that this value is between 0.1 and 0.4 for a 2 meter sized human.
Slope Limit should not be too small. Often using a value of 90 degrees works best. The Character Controller will not be able to climb up walls due to the capsule shape.
Don't get stuck
The Skin Width is one of the most critical properties to get right when tuning your Character Controller. If your character gets stuck, it is most likely because your Skin Width is too small. The Skin Width will let objects slightly penetrate the Controller, but it removes jitter and prevents it from getting stuck.
It's good practice to keep your Skin Width at least greater than 0.01 and more than 10% of the Radius.
We recommend keeping Min Move Distance at 0.
See the Character Controller script reference here.
You can download an example project showing pre-setup animated and moving character controllers from the Resources area on our website.
Hints
- Try adjusting your Skin Width if you find your character getting stuck frequently.
- The Character Controller can affect objects using physics if you write your own scripts.
- The Character Controller can not be affected by objects through physics.
- Note that changing Character Controller properties in the Inspector will recreate the controller in the scene, so any existing Trigger contacts will get lost, and you will not get any OnTriggerEntered messages until the controller is moved again.
class-CharacterJoint
Character Joints are mainly used for ragdoll effects. They are an extended ball-socket joint which allows you to limit the joint on each axis.
If you just want to set up a ragdoll, read about the Ragdoll Wizard.

The Character Joint on a ragdoll
Properties
| Connected Body | Optional reference to the Rigidbody that the joint is dependent upon. If not set, the joint connects to the world. |
| Anchor | The point in the GameObject's local space where the joint rotates around. |
| Axis | The twist axis. Visualized with the orange gizmo cone. |
| Swing Axis | The swing axis. Visualized with the green gizmo cone. |
| Low Twist Limit | The lower limit of the joint. |
| High Twist Limit | The higher limit of the joint. |
| Swing 1 Limit | Lower limit around the defined Swing Axis. |
| Swing 2 Limit | Upper limit around the defined Swing Axis. |
| Break Force | The force that needs to be applied for this joint to break. |
| Break Torque | The torque that needs to be applied for this joint to break. |
Details
Character joints give you a lot of possibilities for constraining motion, as with a universal joint.
The twist axis (visualized with the orange gizmo) gives you the most control over the limits, as you can specify a lower and upper limit in degrees (the limit angle is measured relative to the starting position). A value of -30 in Low Twist Limit->Limit and 60 in High Twist Limit->Limit limits the rotation around the twist axis (orange gizmo) to between -30 and 60 degrees.
The Swing 1 Limit limits the rotation around the swing axis (green gizmo). The limit angle is symmetric; thus a value of e.g. 30 will limit the rotation to between -30 and 30 degrees.
The Swing 2 Limit axis doesn't have a gizmo, but the axis is orthogonal to the two other axes. Just like the previous axis, the limit is symmetric; thus a value of e.g. 40 will limit the rotation around that axis to between -40 and 40 degrees.
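The limit values discussed above can also be set from a script. A sketch under the assumption that a CharacterJoint is already attached; the class name and the specific angle values are illustrative:

```csharp
using UnityEngine;

// Configures the twist and swing limits of an attached CharacterJoint.
public class JointLimitSetup : MonoBehaviour
{
    void Start()
    {
        CharacterJoint joint = GetComponent<CharacterJoint>();

        SoftJointLimit low = joint.lowTwistLimit;
        low.limit = -30.0f;                  // twist no lower than -30 degrees
        joint.lowTwistLimit = low;

        SoftJointLimit high = joint.highTwistLimit;
        high.limit = 60.0f;                  // twist no higher than 60 degrees
        joint.highTwistLimit = high;

        SoftJointLimit swing = joint.swing1Limit;
        swing.limit = 30.0f;                 // symmetric: -30 to 30 degrees
        joint.swing1Limit = swing;
    }
}
```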
Breaking joints
You can use the Break Force and Break Torque properties to set limits for the joint's strength. If these are less than infinity, and a force/torque greater than these limits is applied to the object, the joint will be destroyed and the object will no longer be confined by its restraints.
Hints
- You do not need to assign a Connected Body to your joint for it to work.
- Character Joints require your object to have a Rigidbody attached.
class-ConfigurableJoint
Configurable Joints are extremely customizable. They expose all joint-related properties of PhysX, so they are capable of recreating behaviors similar to all the other joint types.

Properties of the Configurable Joint
Details
There are two primary functions that the Configurable Joint can perform: movement/rotation restriction and movement/rotation acceleration. These functions depend on a number of inter-dependent properties. It may require some experimentation to create the exact behavior you're trying to achieve. We'll now give you an overview of the joint's functionality to make your experimentation as simple as possible.
Movement/Rotation Restriction
You specify restriction per axis and per motion type. XMotion, YMotion, and ZMotion allow you to define translation along that axis. Angular XMotion, Angular YMotion, and Angular ZMotion allow you to define rotation around that axis. Each of these properties can be set to Free (unrestricted), Limited (restricted based on limits you can define), or Locked (restricted to no movement).
Limiting Motion
When you have any of the Motion properties set to Limited, you can define the limits of movement for that axis. You do this by changing the values of one of the Limit properties.
For translational movement (non-angular), the Linear Limit property defines the maximum distance the object can move from its origin. Translation on any Motion property set to Limited will be restricted according to Linear Limit->Limit. Think of this Limit property as setting a border around the axes for the object.
Bouncyness, Spring, and Damper define the behavior of the object when it reaches the Limit on any of the Limited Motion axes. If all of these values are set to 0, the object will instantly stop moving when it reaches the border. Bouncyness will make the object bounce back away from the border. Spring and Damper will use springing forces to pull the object back to the border. This softens the border, so the object will be able to pass through the border and be pulled back, instead of stopping immediately.
Limiting Rotation
Limiting rotation works almost the same as limiting motion. The difference is that the three Angular Motion properties all correspond to different Angular Limit properties. Translation limits along all three axes are defined by the single Linear Limit property, while the rotation limits around each of the three axes are defined by an individual Angular Limit per axis.
The Angular XMotion limit is the most robust, as you can define both a Low Angular XLimit and a High Angular XLimit. Therefore, if you want to define a low rotation limit of -35 degrees and a high rotation limit of 180 degrees, you can do this. For the Y and Z axes, the low and high rotation limits are identical, set together by the Limit property of Angular YLimit or Angular ZLimit.
The same rules from the motion limits apply to the behavior of objects at the rotation limits.
Movement/Rotation Acceleration
You specify object movement or rotation in terms of moving the object toward a particular position/rotation, or velocity/angular velocity. This system works by defining the Target value you want to move toward, and using a Drive to provide the acceleration which will move the object toward that target. Each Drive has a Mode, which you use to define which Target the object is moving toward.
Translation Acceleration
The XDrive, YDrive, and ZDrive properties are what start the object moving along that axis. Each Drive's Mode defines whether the object should be moving toward its Target Position, its Target Velocity, or both. For example, when XDrive's Mode is set to Position, the object will try to move to the value of Target Position->X.
When a Drive is using Position in its Mode, its Position Spring value defines how the object is moved toward the Target Position. Similarly, when a Drive is using Velocity in its Mode, its Maximum Force value defines how the object is accelerated toward its Target Velocity.
Rotation Acceleration
The rotation acceleration properties, Angular XDrive, Angular YZDrive, and Slerp Drive, function the same way as the translational Drives. There is one substantial difference: Slerp Drive behaves differently from the Angular Drives. Therefore, you choose either both Angular Drives or the Slerp Drive by selecting one of the Rotation Drive Modes. You cannot use both at once.
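The drive setup described above can be sketched in a script. This is illustrative only; it assumes a ConfigurableJoint is already attached, and the spring, damper, and target values are placeholders for experimentation:

```csharp
using UnityEngine;

// Drives the joint toward a target position along its local X axis.
public class DriveToPosition : MonoBehaviour
{
    void Start()
    {
        ConfigurableJoint joint = GetComponent<ConfigurableJoint>();

        JointDrive drive = joint.xDrive;
        drive.mode = JointDriveMode.Position;  // follow Target Position->X
        drive.positionSpring = 100.0f;         // pull strength toward the target
        drive.positionDamper = 10.0f;          // resistance against the spring
        joint.xDrive = drive;

        joint.targetPosition = new Vector3(2.0f, 0, 0);
    }
}
```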
Properties
| Anchor | The point where the center of the joint is defined. All physics-based simulation will use this point as the center in calculations. | |
| Axis | The local axis that will define the object's natural rotation based on physics simulation. | |
| Secondary Axis | Together, Axis and Secondary Axis define the local coordinate system of the joint. The third axis is set to be orthogonal to the other two. | |
| XMotion | Allow movement along the X axis to be Free, completely Locked, or Limited according to Linear Limit. | |
| YMotion | Allow movement along the Y axis to be Free, completely Locked, or Limited according to Linear Limit. | |
| ZMotion | Allow movement along the Z axis to be Free, completely Locked, or Limited according to Linear Limit. | |
| Angular XMotion | Allow rotation around the X axis to be Free, completely Locked, or Limited according to the Low and High Angular XLimit. | |
| Angular YMotion | Allow rotation around the Y axis to be Free, completely Locked, or Limited according to Angular YLimit. | |
| Angular ZMotion | Allow rotation around the Z axis to be Free, completely Locked, or Limited according to Angular ZLimit. | |
| Linear Limit | Boundary defining movement restriction, based on distance from the joint's origin. | |
| Limit | The distance in units from the origin to the wall of the boundary. | |
| Bouncyness | Amount of bounce-back force applied to the object when it reaches the Limit. | |
| Spring | Strength of the force applied to move the object back to the Limit. Any value other than 0 will implicitly soften the boundary. | |
| Damper | Resistance strength against the Spring. | |
| Low Angular XLimit | Boundary defining the lower rotation restriction, based on delta from the original rotation. | |
| Limit | The rotation in degrees that the object's rotation should not drop below. | |
| Bouncyness | Amount of bounce-back torque applied to the object when its rotation reaches the Limit. | |
| Spring | Strength of the force applied to move the object back to the Limit. Any value other than 0 will implicitly soften the boundary. | |
| Damper | Resistance strength against the Spring. | |
| High Angular XLimit | Boundary defining the upper rotation restriction, based on delta from the original rotation. | |
| Limit | The rotation in degrees that the object's rotation should not exceed. | |
| Bouncyness | Amount of bounce-back torque applied to the object when its rotation reaches the Limit. | |
| Spring | Strength of the force applied to move the object back to the Limit. Any value other than 0 will implicitly soften the boundary. | |
| Damper | Resistance strength against the Spring. | |
| Angular YLimit | Boundary defining the rotation restriction, based on delta from the original rotation. | |
| Limit | The rotation in degrees that the object's rotation should not exceed. | |
| Bouncyness | Amount of bounce-back torque applied to the object when its rotation reaches the Limit. | |
| Spring | Strength of the torque applied to move the object back to the Limit. Any value other than 0 will implicitly soften the boundary. | |
| Damper | Resistance strength against the Spring. | |
| Angular ZLimit | Boundary defining the rotation restriction, based on delta from the original rotation. | |
| Limit | The rotation in degrees that the object's rotation should not exceed. | |
| Bouncyness | Amount of bounce-back torque applied to the object when its rotation reaches the Limit. | |
| Spring | Strength of the force applied to move the object back to the Limit. Any value other than 0 will implicitly soften the boundary. | |
| Damper | Resistance strength against the Spring. | |
| Target Position | The desired position that the joint should move into. | |
| Target Velocity | The desired velocity with which the joint should move. | |
| XDrive | Definition of how the joint's movement will behave along its local X axis. | |
| Mode | Set the following properties to be dependent on Target Position, Target Velocity, or both. | |
| Position Spring | Strength of a rubber-band pull toward the defined direction. Only used if Mode includes Position. | |
| Position Damper | Resistance strength against the Position Spring. Only used if Mode includes Position. | |
| Maximum Force | Amount of strength applied to push the object toward the defined direction. Only used if Mode includes Velocity. | |
| YDrive | Definition of how the joint's movement will behave along its local Y axis. | |
| Mode | Set the following properties to be dependent on Target Position, Target Velocity, or both. | |
| Position Spring | Strength of a rubber-band pull toward the defined direction. Only used if Mode includes Position. | |
| Position Damper | Resistance strength against the Position Spring. Only used if Mode includes Position. | |
| Maximum Force | Amount of strength applied to push the object toward the defined direction. Only used if Mode includes Velocity. | |
| ZDrive | Definition of how the joint's movement will behave along its local Z axis. | |
| Mode | Set the following properties to be dependent on Target Position, Target Velocity, or both. | |
| Position Spring | Strength of a rubber-band pull toward the defined direction. Only used if Mode includes Position. | |
| Position Damper | Resistance strength against the Position Spring. Only used if Mode includes Position. | |
| Maximum Force | Amount of strength applied to push the object toward the defined direction. Only used if Mode includes Velocity. | |
| Target Rotation | This is a Quaternion. It defines the desired rotation that the joint should rotate into. | |
| Target Angular Velocity | This is a Vector3. It defines the desired angular velocity with which the joint should rotate. | |
| Rotation Drive Mode | Control the object's rotation with either X & YZ or Slerp Drive. | |
| Angular XDrive | Definition of how the joint's rotation will behave around its local X axis. Only used if Rotation Drive Mode is Swing & Twist. | |
| Mode | Set the following properties to be dependent on Target Rotation, Target Angular Velocity, or both. | |
| Position Spring | Strength of a rubber-band pull toward the defined direction. Only used if Mode includes Position. | |
| Position Damper | Resistance strength against the Position Spring. Only used if Mode includes Position. | |
| Maximum Force | Amount of strength applied to push the object toward the defined direction. Only used if Mode includes Velocity. | |
| Angular YZDrive | Definition of how the joint's rotation will behave around its local Y and Z axes. Only used if Rotation Drive Mode is Swing & Twist. | |
| Mode | Set the following properties to be dependent on Target Rotation, Target Angular Velocity, or both. | |
| Position Spring | Strength of a rubber-band pull toward the defined direction. Only used if Mode includes Position. | |
| Position Damper | Resistance strength against the Position Spring. Only used if Mode includes Position. | |
| Maximum Force | Amount of strength applied to push the object toward the defined direction. Only used if Mode includes Velocity. | |
| Slerp Drive | Definition of how the joint's rotation will behave around all local axes. Only used if Rotation Drive Mode is Slerp Only. | |
| Mode | Set the following properties to be dependent on Target Rotation, Target Angular Velocity, or both. | |
| Position Spring | Strength of a rubber-band pull toward the defined direction. Only used if Mode includes Position. | |
| Position Damper | Resistance strength against the Position Spring. Only used if Mode includes Position. | |
| Maximum Force | Amount of strength applied to push the object toward the defined direction. Only used if Mode includes Velocity. | |
| Projection Mode | Properties to track in order to snap the object back to its constrained position when it drifts off too much. | |
| Projection Distance | The distance from the Connected Body that must be exceeded before the object snaps back to an acceptable position. | |
| Projection Angle | The difference in angle from the Connected Body that must be exceeded before the object snaps back to an acceptable position. | |
| Break Force | If a force larger than this value is applied to the object, the joint will be destroyed. |
| Break Torque | If a torque larger than this value is applied to the object, the joint will be destroyed. |
class-ConstantForce
Constant Force is a quick utility for adding constant forces to a Rigidbody. This works great for one-shot objects like rockets, if you don't want them to start with a large velocity but instead accelerate.

A rocket propelled forward by a Constant Force
Properties
| Force | The vector of a force to be applied in world space. |
| Relative Force | The vector of a force to be applied in the object's local space. |
| Torque | The vector of a torque, applied in world space. The object will begin spinning around this vector. The longer the vector is, the faster the rotation. |
| Relative Torque | The vector of a torque, applied in local space. The object will begin spinning around this vector. The longer the vector is, the faster the rotation. |
Details
To make a rocket that accelerates forward, set the Relative Force to be along the positive Z axis. Then use the Rigidbody's Drag property to make it not exceed some maximum velocity (the higher the drag, the lower the maximum velocity will be). In the Rigidbody, also make sure to turn off gravity so that the rocket will always stay on its path.
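The same rocket setup can be done from a script. A sketch assuming the GameObject already has both a Constant Force and a Rigidbody component; the force and drag values are illustrative:

```csharp
using UnityEngine;

// Configures the accelerating rocket described above.
public class RocketSetup : MonoBehaviour
{
    void Start()
    {
        ConstantForce thrust = GetComponent<ConstantForce>();
        thrust.relativeForce = new Vector3(0, 0, 10.0f); // accelerate along local +Z

        Rigidbody body = GetComponent<Rigidbody>();
        body.useGravity = false; // keep the rocket on its path
        body.drag = 0.5f;        // drag caps the maximum velocity
    }
}
```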
Hints
- To make an object float upwards, add a Constant Force with the Force property having a positive Y value.
- To make an object fly forwards, add a Constant Force with the Relative Force property having a positive Z value.
class-FixedJoint
Fixed Joints restrict an object's movement to be dependent upon another object. This is somewhat similar to Parenting, but is implemented through physics rather than the Transform hierarchy. The best scenarios for using them are when you have objects that you want to easily break apart from each other, or when you want to connect the movement of two objects without parenting.

The Fixed Joint Inspector
Properties
| Connected Body | Optional reference to the Rigidbody that the joint is dependent upon. If not set, the joint connects to the world. |
| Break Force | The force that needs to be applied for this joint to break. |
| Break Torque | The torque that needs to be applied for this joint to break. |
Details
There may be scenarios in your game where you want objects to stick together permanently or temporarily. Fixed Joints may be a good Component to use for these scenarios, since you will not have to script a change in your object's hierarchy to achieve the desired effect. The trade-off is that any object using a Fixed Joint must also have a Rigidbody.
For example, if you want to make a "sticky grenade", you can write a script that detects collision with another Rigidbody (such as an enemy), and then creates a Fixed Joint that attaches itself to that Rigidbody. Then, as the enemy moves around, the joint will keep the grenade stuck to them.
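The sticky grenade idea can be sketched like this. The class name is illustrative; the pattern of creating a FixedJoint on impact is the one described above:

```csharp
using UnityEngine;

// On impact with any Rigidbody, glue this object to it with a Fixed Joint.
public class StickyGrenade : MonoBehaviour
{
    void OnCollisionEnter(Collision collision)
    {
        Rigidbody target = collision.rigidbody;
        if (target == null)
            return; // hit a static collider; nothing to stick to

        FixedJoint joint = gameObject.AddComponent<FixedJoint>();
        joint.connectedBody = target; // the grenade now moves with the target
    }
}
```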
Breaking joints
You can use the Break Force and Break Torque properties to set limits for the joint's strength. If these are less than infinity, and a force/torque greater than these limits is applied to the object, the Fixed Joint will be destroyed and the object will no longer be confined by its restraints.
Hints
- You do not need to assign a Connected Body to your joint for it to work.
- Fixed Joints require a Rigidbody.
class-HingeJoint
The Hinge Joint groups together two Rigidbodies, constraining them to move like they are connected by a hinge. It is perfect for doors, but can also be used to model chains, pendulums, and so on.

The Hinge Joint Inspector
Properties
| Connected Body | Optional reference to the Rigidbody that the joint is dependent upon. If not set, the joint connects to the world. |
| Anchor | The position of the axis around which the body swings. The position is defined in local space. |
| Axis | The direction of the axis around which the body swings. The direction is defined in local space. |
| Use Spring | The spring makes the Rigidbody reach for a specific angle compared to its connected body. |
| Spring | Properties of the spring that are used if Use Spring is enabled. |
| Spring | The force the object asserts to move into the position. |
| Damper | The higher this value, the more the object will slow down. |
| Target Position | Target angle of the spring. The spring pulls towards this angle, measured in degrees. |
| Use Motor | The motor makes the object spin around. |
| Motor | Properties of the motor that are used if Use Motor is enabled. |
| Target Velocity | The speed the object tries to attain. |
| Force | The force applied in order to attain the speed. |
| Free Spin | If enabled, the motor is never used to brake the spinning, only to accelerate it. |
| Use Limits | If enabled, the angle of the hinge will be restricted to within the Min and Max values. |
| Limits | Properties of the limits that are used if Use Limits is enabled. |
| Min | The lowest angle the rotation can go. |
| Max | The highest angle the rotation can go. |
| Min Bounce | How much the object bounces when it hits the minimum stop. |
| Max Bounce | How much the object bounces when it hits the maximum stop. |
| Break Force | The force that needs to be applied for this joint to break. |
| Break Torque | The torque that needs to be applied for this joint to break. |
Details
A single Hinge Joint should be applied to a GameObject. The hinge will rotate at the point specified by the Anchor property, moving around the specified Axis property. You do not need to assign a GameObject to the joint's Connected Body property. You should only assign a GameObject to the Connected Body property if you want the joint's Transform to be dependent on the attached object's Transform.
Think about how the hinge of a door works. The Axis in this case is up, positive along the Y axis. The Anchor is placed somewhere at the intersection between the door and the wall. You would not need to assign the wall to the Connected Body, because the joint will be connected to the world by default.
Now think about a doggy door hinge. The doggy door's Axis would be sideways, positive along the relative X axis. The main door should be assigned as the Connected Body, so the doggy door's hinge is dependent on the main door's Rigidbody.
Chains
Multiple Hinge Joints can also be strung together to create a chain. Add a joint to each link in the chain, and attach the next link as the Connected Body.
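The chain setup above can be sketched in a script. This is illustrative; the "links" field is a hypothetical array you would fill in the Inspector with the chain's Rigidbodies, in order:

```csharp
using UnityEngine;

// Strings an ordered array of Rigidbody links together with Hinge Joints.
public class ChainBuilder : MonoBehaviour
{
    public Rigidbody[] links;

    void Start()
    {
        for (int i = 0; i < links.Length - 1; i++)
        {
            HingeJoint joint = links[i].gameObject.AddComponent<HingeJoint>();
            joint.connectedBody = links[i + 1]; // attach the next link in the chain
        }
    }
}
```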
Hints
- You do not need to assign a Connected Body to your joint for it to work.
- Use Break Force in order to make dynamic damage systems. This is really cool as it allows the player to break a door off its hinges by blasting it with a rocket launcher or running into it with a car.
- The Spring, Motor, and Limits properties allow you to fine-tune your joint's behavior.
class-MeshCollider
The Mesh Collider takes a Mesh Asset and builds its Collider based on that mesh. It is far more accurate for collision detection with complicated meshes than using primitives. Mesh Colliders that are marked as Convex can collide with other Mesh Colliders.

A Mesh Collider used on a stair-shaped object
Properties
| Material | Reference to the Physic Material to use. The physics material determines how this Collider behaves when it collides with others. |
| Is Trigger | If enabled, this Collider is used for triggering events, and is ignored by the physics engine. |
| Mesh | Reference to the Mesh to use for collisions. |
| Smooth Sphere Collisions | When this is enabled, collision mesh normals are smoothed. You should enable this on smooth surfaces, e.g. rolling terrain without hard edges, to make sphere rolling smoother. |
| Convex | If enabled, this Mesh Collider will collide with other Mesh Colliders. Convex Mesh Colliders are limited to 255 triangles. |
Details
The Mesh Collider builds its collision representation from the Mesh attached to the GameObject, and reads the properties of the attached Transform to set its position and scale correctly.
Collision meshes use backface culling. If an object collides with a mesh that would be backface culled graphically, it will also not collide with it physically.
There are some limitations when using the Mesh Collider. Usually, two Mesh Colliders cannot collide with each other. All Mesh Colliders can collide with any primitive Collider. If your mesh is marked as Convex, then it can collide with other Mesh Colliders.
Colliders work with Rigidbodies to bring physics in Unity to life. Whereas Rigidbodies allow objects to be controlled by physics, Colliders allow objects to collide with each other. Colliders must be added to objects independently of Rigidbodies. A Collider does not necessarily need a Rigidbody attached, but a Rigidbody must be attached in order for the object to move as a result of collisions.
When a collision occurs between two Colliders and at least one of them has a Rigidbody attached, three collision messages are sent out to the objects attached to them. These events can be handled in scripting, and allow you to create unique behaviors with or without making use of the built-in NVIDIA PhysX engine.
Triggers
An alternative way of using Colliders is to mark them as a Trigger: just enable the IsTrigger property checkbox in the Inspector. Triggers are effectively ignored by the physics engine, and have a unique set of three trigger messages that are sent out when a collision with a Trigger occurs. Triggers are useful for triggering other events in your game, like cutscenes, automatic door opening, displaying tutorial messages, and so on. Use your imagination!
Be aware that in order for two Triggers to send out trigger events when they collide, one of them must include a Rigidbody as well. For a Trigger to collide with a normal Collider, one of them must have a Rigidbody attached. For a detailed chart of the different types of collisions, see the collision action matrix in the Advanced section below.
Friction and bounciness
Friction, bounciness and softness are defined in the Physic Material. The Standard Assets contain the most common physics materials. To use one of them, click on the Physic Material drop-down and select one, e.g. Ice. You can also create your own physics materials and tweak all friction values.
Hints
- Mesh Colliders cannot collide with each other unless they are marked as Convex. Therefore, they are most useful for background objects like environment geometry.
- Convex Mesh Colliders are limited to 255 triangles.
- Primitive Colliders are less costly for objects under physics control.
- When you attach a Mesh Collider to a GameObject, its Mesh property will default to the mesh being rendered. You can change this by assigning a different Mesh.
- To add multiple Colliders to an object, create child GameObjects and attach a Collider to each one. This allows each Collider to be manipulated independently.
- You can look at the gizmos in the Scene View to see how the Collider is being calculated on your object.
- Colliders do their best to match the scale of an object. If you have a non-uniform scale (a scale which is different in each direction), only the Mesh Collider can match completely.
- If you are moving an object through its Transform component but still want to receive Collision/Trigger messages, you must attach a Rigidbody to the object that is moving.
Advanced
Collider combinations
There are numerous different combinations of collisions that can happen in Unity. Each game is unique, and different combinations may work better for different types of games. If you're using physics in your game, it will be very helpful to understand the basic Collider types, their common uses, and how they interact with other types of objects.
Static Collider
These are GameObjects that do not have a Rigidbody attached, but do have a Collider attached. These objects should remain still, or move very little. They work great for your environment geometry. They will not move if a Rigidbody collides with them.
Rigidbody Collider
These GameObjects contain both a Rigidbody and a Collider. They are completely affected by the physics engine, and their trajectory changes through applied forces and collisions. They can collide with GameObjects that only contain a Collider. These will likely be your primary type of Collider in games that use physics.
Kinematic Rigidbody Collider
This GameObject contains a Collider and a Rigidbody which is marked IsKinematic. To move this GameObject, you modify its Transform component rather than applying forces. They're similar to Static Colliders but work better when you want to move the Collider around frequently. There are some other specialized scenarios for using this GameObject.
This object can be used to get around the restriction that Static Colliders cannot send trigger events. Since Triggers must have a Rigidbody attached, you should attach a Rigidbody and then enable IsKinematic. This will prevent your object from moving under physics influence, and allow you to receive trigger events when you want to.
Kinematic Rigidbodies can easily be turned on and off. This is great for creating ragdolls: you normally want a character to follow an animation, then turn into a ragdoll when a collision occurs, prompted by an explosion or some other effect.
If Rigidbodies come to rest and are not moving for some time, they will fall asleep. That is, they will not be updated during the physics loop, and their position stays as it is. If you move a Kinematic Rigidbody Collider out from underneath normal Rigidbody Colliders that are resting on top of it, the sleeping Rigidbodies will wake up and be correctly updated again by the physics loop. So if you have a lot of Static Colliders that you want to move around and have different objects fall on them correctly, use Kinematic Rigidbody Colliders.
Collision action matrix
Depending on the configuration of the two colliding objects, a number of different actions can occur. The chart below outlines what you can expect from two colliding objects, based on the components attached to them. Some of the combinations only cause one of the two objects to be affected by the collision, so keep the standard rule in mind: physics will not be applied to objects that do not have a Rigidbody attached.
| Collision detection occurs and collision messages are sent upon collision | ||||||
| Static Collider | Rigidbody Collider | Kinematic Rigidbody Collider | Static Trigger Collider | Rigidbody Trigger Collider | Kinematic Rigidbody Trigger Collider | |
| Static Collider | Y | |||||
| Rigidbody Collider | Y | Y | Y | |||
| Kinematic Rigidbody Collider | Y | |||||
| Static Trigger Collider | ||||||
| Rigidbody Trigger Collider | ||||||
| Kinematic Rigidbody Trigger Collider | ||||||
| Trigger messages are sent upon collision | ||||||
| Static Collider | Rigidbody Collider | Kinematic Rigidbody Collider | Static Trigger Collider | Rigidbody Trigger Collider | Kinematic Rigidbody Trigger Collider | |
| Static Collider | Y | Y | ||||
| Rigidbody Collider | Y | Y | Y | |||
| Kinematic Rigidbody Collider | Y | Y | Y | |||
| Static Trigger Collider | Y | Y | Y | Y | ||
| Rigidbody Trigger Collider | Y | Y | Y | Y | Y | Y |
| Kinematic Rigidbody Trigger Collider | Y | Y | Y | Y | Y | Y |
Layer-Based Collision Detection
Unity 3.x added Layer-Based Collision Detection, which lets you configure, for any combination of layers, whether objects on those layers collide with each other. For more information, click here.
Page last updated: 2009-07-17
class-PhysicMaterial
The Physic Material is used to adjust the friction and bouncing effects of colliding objects.
To create a Physic Material, select Assets->Create->Physic Material from the menu bar. Then drag the Physic Material from the Project View onto a Collider in the scene.

The Physic Material Inspector
Properties
| Dynamic Friction | The friction used when already moving. Usually a value from 0 to 1. A value of 0 feels like ice; a value of 1 will make the object come to rest very quickly unless a lot of force or gravity pushes it. |
| Static Friction | The friction used when an object is lying still on a surface. Usually a value from 0 to 1. A value of 0 feels like ice; a value of 1 will make it very hard to get the object moving. |
| Bounciness | How bouncy the surface is. A value of 0 will not bounce. A value of 1 will bounce without any loss of energy. |
| Friction Combine Mode | How the friction of two colliding objects is combined. |
| Average | The two friction values are averaged. |
| Min | The smaller of the two values is used. |
| Max | The larger of the two values is used. |
| Multiply | The friction values are multiplied with each other. |
| Bounce Combine | How the bounciness of two colliding objects is combined. It has the same modes as Friction Combine Mode. |
| Friction Direction 2 | The direction of anisotropy. Anisotropic friction is enabled if this direction is not zero. Dynamic Friction 2 and Static Friction 2 will be applied along Friction Direction 2. |
| Dynamic Friction 2 | If anisotropic friction is enabled, DynamicFriction2 will be applied along Friction Direction 2. |
| Static Friction 2 | If anisotropic friction is enabled, StaticFriction2 will be applied along Friction Direction 2. |
Details
Friction is the quantity which prevents surfaces from sliding off each other. This value is critical when trying to stack objects. Friction comes in two forms: dynamic and static. Static friction is used when the object is lying still; it prevents the object from starting to move. If a large enough force is applied to the object, it will start moving. At this point Dynamic Friction comes into play. Dynamic Friction will attempt to slow down the object while it is in contact with another.
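A Physic Material can also be built and assigned at runtime. This is an illustrative sketch, not from the manual; the friction values are placeholders for an ice-like surface:

```csharp
using UnityEngine;

// Creates an ice-like Physic Material in code and assigns it to
// this object's Collider.
public class IceSurface : MonoBehaviour
{
    void Start()
    {
        PhysicMaterial ice = new PhysicMaterial("Ice");
        ice.dynamicFriction = 0.05f; // keeps sliding once moving
        ice.staticFriction = 0.05f;  // easy to get moving
        ice.bounciness = 0.0f;       // no bounce

        GetComponent<Collider>().material = ice;
    }
}
```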
Hints
- Don't try to use a standard physic material for the main character. Make a customized one to get the feel you want.
class-Rigidbody
Rigidbody により、 GameObject が物理特性の制御下で動作するようになります。 リジッドボディは、力やトルクを受け、現実的な方向にオブジェクトを動かすことができます。 GameObject は、重力の影響を影響を受けるリジッドボディを含めるか、スクリプティングを通じて加えた力の下で動作するか、NVIDIA PhysX 物理特性エンジンを通じて、その他のオブジェクトと相互作用する必要があります。

「リジッドボディにより、GameObject は物理的影響の下で動作できます」
プロパティ
| Mass | オブジェクトの質量 (単位 : kg)。 質量をその他のリジッドボディの 100 倍 にしておくことをお勧めします。 | |
| Drag | 力により動く際に、オブジェクトに影響する空気抵抗の量。 0 の場合、空気抵抗が 0 で、無限の場合、オブジェクトは直ちに動きを止めます。 | |
| Angular Drag | トルクにより回転する際に、オブジェクトに影響する空気抵抗の量。 0 の場合、空気抵抗が 0 で、無限の場合、オブジェクトは直ちに回転を止めます。 | |
| Use Gravity | 有効にすると、オブジェクトは重力の影響を受けます。 | |
| Is Kinematic | 有効にすると、オブジェクトは物理特性エンジンによって駆動されませんが、その Transform によってのみ操作できます。 これは、プラットフォームを移したい場合や、HingeJoint を追加したリジッドボディをアニメート化したい場合に便利です。 | |
| Interpolate | リジッドボディの移動でギクシャクした動きを求めている場合にのみこのオプションのいずれかを試します。 | |
| None | 補間は適用されません。 | |
| Interpolate | 前のフレームのトランスフォームに基づいて、トランスフォームを円滑にします。 | |
| Extrapolate | 次のフレームの推定トランスフォームに基づいて、トランスフォームを円滑にします。 | |
| Freeze Rotation | 有効にすると、GameObject はスクリプトを通じて追加される衝突または力に基づいて回転しません。「transform.Rotate()」を使用した場合のみ回転します。 | |
| Collision Detection | 高速で移動するオブジェクトが、衝突を検出せずに、他のオブジェクトを通過させないようにする場合に使用します。 | |
| Discrete | シーン内のその他すべてのコライダに対して、個別の衝突検出を使用します。 その他のコライダは、衝突のテスト時に個別衝突検出を使用します。 通常の衝突に使用されます (これはデフォルト値です)。 | |
| Continuous | 動的コライダ (リジッドボディ付き) に対しては個別衝突検出を使用し、スタティックな MeshCollider (リジッドボディなし) に対しては連続衝突検出を使用します。 Continuous Dynamic に設定されたリジッドボディは、このリジッドボディとの衝突をテストする際に、連続衝突検出を使用します。 その他のリジッドボディは、個別衝突検出を使用します。 Continuous Dynamic 検出の衝突対象とする必要があるオブジェクトに使用されます。 (物理特性パフォーマンスに大きく影響するため、高速オブジェクトの衝突に問題がない場合は、Discrete に設定しておきます) | |
| Continuous Dynamic | 連続および連続動的衝突に設定されたオブジェクトに対して、連続衝突検出を使用します。 スタティックな MeshColliders との連続衝突検出も使用します (リジッドボディなし)。 その他すべてのコライダに対しては、個別衝突検出を使用します。 高速移動するオブジェクトに使用されます。 | |
| Constraints | リジッドボディの動きに関する制限:- | |
| Freeze Position | ワールドの X、Y、Z 軸でのリジッドボディの移動を選択的に停止します。 | |
| Freeze Rotation | ワールドの X、Y、Z 軸でのリジッドボディの回転を選択的に停止します。 |
詳細
Rigidbody により、GameObject が物理特性エンジンの制御下で動作するようになります。 これにより、現実的な衝突、多様な種類のジョイント、その他のクールな動作への道が開けます。 リジッドボディに力を加えて GameObject を操作すると、Transform Component を直接調整した場合とは違うルック & フィールが得られます。 一般に、同じ GameObject のリジッドボディとトランスフォームの両方を操作せず、どちらか一方だけを操作してください。
トランスフォームの操作とリジッドボディの最大の差は、力を使用するかしないかです。 リジッドボディは力やトルクを受けることができますが、トランスフォームはできません。 トランスフォームは移動や回転はできますが、物理特性の使用とは異なります。 自分で試してみれば、その顕著な差に気づくでしょう。 リジッドボディに力 / トルクを加えると、実際にオブジェクトのトランスフォーム コンポーネントの位置や回転が変更されます。 このため、どちらか一方だけを使用する必要があります。 物理特性使用中にトランスフォームを変更すると、衝突やその他の計算に問題が生じる場合があります。
リジッドボディは、物理特性エンジンの影響を受ける前に、GameObject に明示的に追加する必要があります。 メニューバーの「Components->Physics->Rigidbody」から、選択したオブジェクトにリジッドボディを追加できます。 これでオブジェクトの物理特性の準備ができました。重力の影響を受け、スクリプティングを介して力を受けることができますが、正確に希望通りに動作させるには、Collider やジョイントを追加する必要があります。
親子関係
オブジェクトが物理特性の制御下にある場合、トランスフォームの親が移動する方法から半分独立して移動します。 親を移動すると、リジッドボディの子をそれに沿って引っ張ります。 しかし、リジッドボディは重力および衝突イベントへの対応により、落下していきます。
スクリプティング
リジッドボディをコントロールするには、主にスクリプトを使用して力またはトルクを追加します。 これは、オブジェクトのリジッドボディで「AddForce()」と「AddTorque()」を呼び出すことで行います。 物理特性を使用する際は、オブジェクトのトランスフォームを直接変えないようにしてください。
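上記の手順の最小のスケッチです。物理の更新タイミングである FixedUpdate 内で AddForce() と AddTorque() を呼び出す例で、クラス名や力の値は説明用の仮のものです。

```csharp
using UnityEngine;

// 説明用のサンプル スクリプト。Rigidbody を持つ GameObject にアタッチする想定です。
public class ForceExample : MonoBehaviour
{
    void FixedUpdate()
    {
        // トランスフォームを直接変更する代わりに、力とトルクを加えます。
        // 物理特性に関する処理は FixedUpdate のタイミングで行います。
        rigidbody.AddForce(Vector3.forward * 10.0f);
        rigidbody.AddTorque(Vector3.up * 5.0f);
    }
}
```

トランスフォームを直接書き換える代わりにこの方法を使うことで、衝突計算との矛盾を避けられます。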
アニメーション
一部の状況、主にラグドール効果を作成する場合に、アニメーションと物理特性間でオブジェクトのコントロールを切り替える必要があります。 このため、リジッドボディには「isKinematic」とマークを付けることができます。 リジッドボディが「isKinematic」の場合、衝突や力、その他の PhysX の影響を受けません。 つまり、Transform コンポーネントを直接操作することで、オブジェクトをコントロールする必要があるということです。 キネマティック リジッドボディはその他のオブジェクトに影響しますが、これら自体は物理特性の影響を受けません。 例えば、キネマティック オブジェクトに追加されたジョイントは、そこに追加されたその他のリジッドボディを制約し、キネマティック リジッドボディは衝突を通じて、その他のリジッドボディに影響します。
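アニメーションと物理特性の切り替えは、スクリプトから isKinematic を書き換えることで行えます。以下は説明用の仮の例で、クラス名とメソッド名は本マニュアルにはない想定上のものです。

```csharp
using UnityEngine;

// アニメーションと物理の間でコントロールを切り替える仮の例。
// ラグドール化のイベント (キャラクターの死亡など) から呼び出す想定です。
public class RagdollSwitch : MonoBehaviour
{
    public void EnableRagdoll()
    {
        // isKinematic を無効にすると、リジッドボディは再び
        // 衝突や力の影響を受けるようになります。
        rigidbody.isKinematic = false;
    }

    public void EnableAnimation()
    {
        // isKinematic を有効にすると物理の影響を受けなくなるため、
        // Transform の直接操作 (アニメーション) でオブジェクトを動かします。
        rigidbody.isKinematic = true;
    }
}
```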
コライダ
コライダは、衝突を発生させるために、リジッドボディと共に追加する必要のある別の種類のコンポーネントです。 2 つのリジッドボディが互いに衝突する場合、物理特性エンジンは、両方のオブジェクトもコライダを追加するまで、衝突を計算しません。 コライダのないリジッドボディは、物理特性シミュレーション中に互いを簡単に通過します。

「コライダはリジッドボディの物理特性の境界を定義します」
「Component->Physics」メニューでコライダを追加します。 詳細については、個々のコライダのコンポーネント リファレンス ページを参照してください。
- Box Collider - キューブのプリミティブの形状
- Sphere Collider - 球体のプリミティブの形状
- Capsule Collider - カプセルのプリミティブの形状
- Mesh Collider - オブジェクトのメッシュからコライダを作成し、別のメッシュ コライダとは衝突できません
- Wheel Collider - 車両またはその他の移動する乗り物の作成用
複合コライダ
複合コライダは、プリミティブなコライダの組み合わせにより、ひとつのコライダとしての挙動を示すものです。 複雑なメッシュをコライダとして使用したいが、Mesh Collider を使用できない場合に便利です。 複合コライダを作成するには、コライダ オブジェクトの子オブジェクトを作成し、それから各々の子オブジェクトにプリミティブなコライダを追加します。 これにより、各々のコライダを別々に容易に配置、回転、拡大縮小することが出来ます。

リアリティのある複合コライダ
上記の図では、ガンモデルのゲームオブジェクトにはリジッドボディがアタッチされており、子オブジェクトとして複数のプリミティブなコライダを含みます。 親のリジッドボディが力により動かされた場合、子コライダが追従して動きます。 プリミティブなコライダは環境上にある Mesh Collider と衝突し、親リジッドボディは、自身に加えられた力の作用と、子コライダがシーン上の他のコライダと衝突した作用の双方を加味して軌道が変化します。
Mesh Collider同士は通常では衝突しませんが、Convexをオンにした場合のみ衝突することが出来ます。良くある方法として、動く全てのオブジェクトにはプリミティブなコライダを組み合わせ、動かない背景のオブジェクトにMesh Colliderを使います。
連続衝突検出
連続衝突検出は、高速移動するコライダが互いに通過しないようにする機能です。 これは、通常の(「Discrete」) 衝突検出使用時、オブジェクトが 1 つのフレームでコライダの片側にあり、次のフレームでコライダを通過している場合に発生することがあります。 これを解決するには、高速移動するオブジェクトのリジッドボディで連続衝突検出を有効にできます。 衝突検出モードを「Continuous」に切り替え、リジッドボディがスタティックな (つまり、非リジッドボディ) MeshColliders を通過させないようにします。 衝突検出モードを「Continuous Dynamic」に切り替え、リジッドボディが、衝突検出モードを「Continuous」または「Continuous Dynamic」に設定したその他のサポートされているリジッドボディを通過させないようにします。 連続衝突検出は、Box-、Sphere- および CapsuleCollider でサポートされています。
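連続衝突検出は、インスペクタのほか、スクリプトからも設定できます。以下は、高速で移動する弾丸などを想定した最小のスケッチです (クラス名は説明用の仮のものです)。

```csharp
using UnityEngine;

// 高速で移動するオブジェクトに連続衝突検出を設定するスケッチ。
// Rigidbody を持つ GameObject にアタッチする想定です。
public class FastProjectile : MonoBehaviour
{
    void Start()
    {
        // スタティックな MeshCollider や、Continuous / Continuous Dynamic に
        // 設定された他のリジッドボディをすり抜けないようにします。
        rigidbody.collisionDetectionMode = CollisionDetectionMode.ContinuousDynamic;
    }
}
```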
正しいサイズの使用
GameObject のメッシュのサイズは、リジッドボディの質量よりもはるかに重要です。 リジッドボディが期待通りに動作しなていない場合、ゆっくり移動するか、浮くか、正しく衝突しません。 Unity のデフォルトの単位スケールは、1 単位 = 1 メートルなので、インポートされたメッシュのスケールは維持され、物理特性計算に適用されます。 例えば、倒壊しかけている高層ビルは、積み木で作った塔とはかなり違う形で崩れるため、正確にスケールするには、サイズの異なるオブジェクトをモデル化する必要があります。
人間をモデリングしている場合、Unity では、その人間の身長は約 2 メートルになるようにします。 オブジェクトが正しいサイズかどうかを確認するには、デフォルトのキューブと比較します。 キューブはメニューから新規作成できます。 キューブの高さはちょうど 1 メートルになるため、作成している人間はその 2 倍の身長になるはずです。
メッシュ自体を調整できない場合、Project View でメッシュ アセットを選択し、そのインポート設定を開くことで、特定のメッシュ アセットの均一なスケールを変更できます。 ここでスケールを変更し、メッシュを再インポートできます。
ゲームで、GameObject を異なるスケールでインスタンス化する必要がある場合、トランスフォームのスケール軸の値を調整しても大丈夫です。 欠点は、物理特性シミュレーションが、オブジェクトのインスタンス化時により多くの作業をする必要があり、ゲーム内でパフォーマンスの低下を引き起こす可能性があることです。 これは大きな損失ではありませんが、他の 2 つのオプションでスケールを仕上げるほど効率的ではありません。 また、不均一なスケールだと、ペアレンティング使用時に望まぬ動作が生じる場合があります。 このため、モデリング アプリケーションで正しいスケールでオブジェクトを作成するのが常に最適です。
ヒント
- 2 つのリジッドボディの相対的な「Mass」は、リジッドボディが互いに衝突する際に、どのように反応するかを決定します。
- 1 つのリジッドボディの「Mass」を他方より大きくしても、自由落下での落下速度は上がりません。 これを行うには、「Drag」を使用します。
- 「Drag」値が低いと、オブジェクトが重く見えるようになります。 この値が高いと、軽く見えます。 「Drag」の通常の値は、.001 (金属の塊) と 10 (羽) の間です。
- オブジェクトのトランスフォーム コンポーネントを直接操作しているが、物理特性が必要な場合、リジッドボディを追加し、キネマティックにします。
- トランスフォーム コンポーネントを通じて、GameObject を移動させているが、衝突/トリガー メッセージを受信したい場合は、リジッドボディを移動しているオブジェクトに追加する必要があります。
class-SphereCollider
Sphere Collider は、基本的な球体形状の衝突プリミティブです。

球体コライダの積み重ね
プロパティ
| Material | 使用する物理マテリアル への参照。物理マテリアルによりコライダが他と衝突したときの物理挙動の条件が定義されます |
| Is Trigger | オンにすると、コライダはイベントのトリガーとなり、物理エンジンが無視されます |
| Radius | コライダのサイズ。 |
| Center | オブジェクトのローカル空間でのコライダの位置。 |
詳細
球体コライダは、均一なスケールでサイズ変更できますが、個々の軸に沿ってサイズ変更はできません。 落下する岩や、卓球の球、ビー玉などに最適です。

標準の球体コライダ
コライダはリジッドボディ(剛体)と連動して、Unity 上での物理挙動を実現します。 リジッドボディが「オブジェクトを物理法則にしたがって動かす」一方で、コライダは「オブジェクトが互いに衝突すること」を可能にします。 コライダはリジッドボディとは別にオブジェクトにアタッチする必要があります。 コライダとリジッドボディは必ずしも両方アタッチされていなくとも良いですが、コリジョンの結果としてオブジェクトを動かすためには、リジッドボディがアタッチされていることが必須です。
二つのコライダ間で衝突が発生し、かつ少なくとも一つのオブジェクトにリジッドボディがアタッチされている場合、3 種類の衝突メッセージが発行されます。 これらのイベントはスクリプトでハンドリングでき、ビルトインの NVIDIA PhysX エンジンを使用するかどうかにかかわらず、独自のビヘイビアを定義できるようになります。
トリガー
コライダは、インスペクタで Is Trigger 属性のチェックボックスをオンにすることで、トリガーとしてマークすることもできます。 トリガーは物理エンジンから無視されるようになり、コライダの衝突が発生した場合、3 種類のトリガー メッセージが発行されます。 トリガーは、ゲームにおける別のイベントを発動するのに便利です (カットシーンやドアの自動開閉、チュートリアル メッセージの表示、等々。 想像を働かせれば様々なパターンで応用できます)。
留意しないといけないこととして、二つのトリガーが衝突時にトリガー イベントを発行するためには、片方のトリガーはリジッドボディを含む必要があります。 同様に、トリガーが通常のコライダと衝突するためには、片方がリジッドボディを含む必要があります。 全てのコライダ種類の組み合わせの一覧については、本項の「応用」セクションの「衝突アクションマトリクス」の図表を参照下さい。
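トリガー メッセージのハンドリングは、次のようなスクリプトで行えます。クラス名と "Player" タグは説明用の仮のものです。

```csharp
using UnityEngine;

// Is Trigger をオンにしたコライダで発行されるトリガー メッセージを
// ハンドリングする仮の例。
public class DoorTrigger : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        // 仮の "Player" タグを持つオブジェクトが入った時だけ反応します。
        if (other.CompareTag("Player"))
        {
            Debug.Log("Player entered the trigger");
        }
    }

    void OnTriggerExit(Collider other)
    {
        Debug.Log(other.name + " left the trigger");
    }
}
```

OnTriggerEnter / OnTriggerStay / OnTriggerExit が、本文で述べた 3 種類のトリガー メッセージに対応します。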
摩擦係数と反射係数
摩擦係数(friction)、反射係数(bouncyness)、やわらかさ(softness)は 物理マテリアル の属性として定義されています。スタンダードアセット のなかに頻繁に使用される物理マテリアルが含まれています。使用する際はPhysic Materialドロップダウンボックスをクリックし、どれかを選択します(例 Ice)。物理マテリアル については自分でカスタムのものを作成することができ、摩擦係数などすべて調整することが出来ます。
複合コライダ
複合コライダは、プリミティブなコライダの組み合わせにより、ひとつのコライダとしての挙動を示すものです。 複雑なメッシュをコライダとして使用したいが、Mesh Collider を使用できない場合に便利です。 複合コライダを作成するには、コライダ オブジェクトの子オブジェクトを作成し、それから各々の子オブジェクトにプリミティブなコライダを追加します。 これにより、各々のコライダを別々に容易に配置、回転、拡大縮小することが出来ます。

リアリティのある複合コライダ
上記の図では、ガンモデルのゲームオブジェクトにはリジッドボディがアタッチされており、子オブジェクトとして複数のプリミティブなコライダを含みます。 親のリジッドボディが力により動かされた場合、子コライダが追従して動きます。 プリミティブなコライダは環境上にある Mesh Collider と衝突し、親リジッドボディは、自身に加えられた力の作用と、子コライダがシーン上の他のコライダと衝突した作用の双方を加味して軌道が変化します。
Mesh Collider同士は通常では衝突しませんが、Convexをオンにした場合のみ衝突することが出来ます。良くある方法として、動く全てのオブジェクトにはプリミティブなコライダを組み合わせ、動かない背景のオブジェクトにMesh Colliderを使います。
ヒント
- 複数のコライダをオブジェクトに追加するためには、子ゲームオブジェクトを作成し、コライダを各々にアタッチします。この方法でコライダは独立して動作させることが出来ます。
- シーンビューのギズモをみることでオブジェクトとコライダの大きさを確かめることができます
- コライダはオブジェクトと大きさを出来るかぎりマッチさせるのが良いです。均等でない大きさ(各々の軸方向での大きさが異なる)の場合、メッシュコライダでないかぎりオブジェクトと完全にマッチさせることは出来ません
- トランスフォームコンポーネントを通してオブジェクトを移動させつつ、衝突、トリガーのメッセージを受け取るためには動いているオブジェクトにリジッドボディをアタッチする必要があります。
- 爆発を作成する場合、大きな Drag 値を設定したリジッドボディと球体コライダを追加すると、衝突した壁から若干押し出されるので、非常に有効な場合があります。
応用
コライダの組み合わせ
Unity 上では、複数の異なる衝突の組み合わせがありえます。 それぞれのゲームで追求する内容は異なるので、ゲームのタイプによって組み合わせの良し悪しが決まってきます。 ゲームで物理挙動を使用している場合、基本的なコライダの種類を理解し、主な働き、使用例、他のオブジェクトとの相互作用について理解を深める必要があります。
スタティックコライダ
リジッドボディを含まないが、コライダを含むゲームオブジェクトについて考えます。 これらのオブジェクトは動かない、あるいは動くとしてもわずかであることが望ましいです。 これらは背景のオブジェクトとして最適です。 リジッドボディと衝突したとしても動きません。
リジッドボディコライダ
リジッドボディとコライダ双方を含むゲームオブジェクトについて考えます。これらのオブジェクトは物理エンジンに影響を受け、加えられた力や衝突によって軌道が変化します。またコライダを含むゲームオブジェクトと衝突させることが出来ます。多くの場合は物理エンジンを使用したい場合の主要なコライダとなります。
キネマティック リジッドボディコライダ
IsKinematic がオンとなっているリジッドボディと、コライダの双方を含むゲームオブジェクトについて考えます。 このオブジェクトを動かすには、力を加えるのではなく、トランスフォーム コンポーネントの値を書き換えて移動させます。 スタティックコライダと共通点が多いですが、コライダを頻繁に動かしたい場合に役立ちます。 その他にも、このオブジェクトを使用するのが適切なシナリオはいくつか考えられます。
このオブジェクトは、通常ならスタティックコライダを使いたいところでトリガー イベントを発行したい場合に役立ちます。 トリガーはリジッドボディを含む必要があるため、リジッドボディをアタッチしたうえで IsKinematic をオンに設定します。 これにより、オブジェクトが物理挙動の影響を受けず、必要なときにトリガー イベントを受け取ることが出来るようになります。
キネマティック リジッドボディは簡単にオンオフを切り替えることが出来ます。 これはラグドール作成に大いに役立ちます。 たとえば、キャラクターがある場面まではアニメーションどおりに動作し、衝突や爆発などの何らかのエフェクトを受けた後はラグドールとして動作させたい場合に役立ちます。
リジッドボディを長時間動かさない場合、完全にスリープ状態とさせることができます。 言い換えると、物理挙動のアップデート処理のなかで値が更新されることはなく、位置もそのままとなります。 キネマティック リジッドボディ コライダを、その上で静止している通常のリジッドボディ コライダの下から動かすと、スリープ状態は解除され、物理挙動のアップデート処理が再び始まります。 つまり、動かしたいスタティックコライダが複数あり、その上に異なるオブジェクトを落としたい場合には、キネマティック リジッドボディ コライダを使用すべきです。
衝突アクションマトリクス
衝突する 2 つのオブジェクトの設定によっては、同時に複数のアクションが走る可能性があります。 以下のチャートでは、二つのオブジェクトが衝突する際の動作を、アタッチされているコンポーネント等を基準に整理しました。 いくつかの組み合わせにおいては片方のオブジェクトのみ衝突の影響を受けるので、原則として「オブジェクトにリジッドボディがなければ物理挙動もない」ということをよく頭に入れておく必要があります。
| 衝突により衝突メッセージを受け取るか? | ||||||
| Static Collider | Rigidbody Collider | Kinematic Rigidbody Collider | Static Trigger Collider | Rigidbody Trigger Collider | Kinematic Rigidbody Trigger Collider | |
| Static Collider | Y | |||||
| Rigidbody Collider | Y | Y | Y | |||
| Kinematic Rigidbody Collider | Y | |||||
| Static Trigger Collider | ||||||
| Rigidbody Trigger Collider | ||||||
| Kinematic Rigidbody Trigger Collider | ||||||
| 衝突によりトリガーメッセージは発行されるか? | ||||||
| Static Collider | Rigidbody Collider | Kinematic Rigidbody Collider | Static Trigger Collider | Rigidbody Trigger Collider | Kinematic Rigidbody Trigger Collider | |
| Static Collider | Y | Y | ||||
| Rigidbody Collider | Y | Y | Y | |||
| Kinematic Rigidbody Collider | Y | Y | Y | |||
| Static Trigger Collider | Y | Y | Y | Y | ||
| Rigidbody Trigger Collider | Y | Y | Y | Y | Y | Y |
| Kinematic Rigidbody Trigger Collider | Y | Y | Y | Y | Y | Y |
Layer-Based Collision Detection
Unity 3.x では Layer-Based Collision Detection が機能として追加され、どのレイヤーの組み合わせで衝突が発生するかをオブジェクトに設定できるようになりました。 詳細情報については Layer-Based Collision Detection のページを参照してください。
Page last updated: 2012-11-13
class-SpringJoint
Spring Joint は、2 つの Rigidbody をグループ化し、スプリングで連結されているかのように動くよう制約します。

スプリング ジョイント Inspector
プロパティ
| Connected Body | ジョイントが依存する剛体へのオプションの参照。 |
| Anchor | ジョイントの中心を定義する、オブジェクトのローカル空間での位置 (静止時)。 これは、オブジェクトが引き寄せられる先の点ではありません。 |
| X | X 軸に沿ったジョイントのローカル点の位置。 |
| Y | Y 軸に沿ったジョイントのローカル点の位置。 |
| Z | Z 軸に沿ったジョイントのローカル点の位置。 |
| Spring | スプリングの強さ。 |
| Damper | 有効な場合にスプリングを減らす量。 |
| Min Distance | この距離を超えると、スプリングが起動しません。 |
| Max Distance | この距離未満の場合、スプリングが起動しません。 |
| Break Force | このジョイントが分解するのに適用される必要のある力。 |
| Break Torque | このジョイントが分解するのに適用される必要のあるトルク。 |
詳細
スプリング ジョイントにより、剛体 GameObject は特定の目標位置に引っ張られます。 この位置は、別のリジッドボディの GameObject かワールドのいずれかになります。 GameObject がこの目標位置から更に離れると、スプリング ジョイントが元の目標位置に引き戻す力を加えます。 これにより、ゴムバンドやパチンコに非常に似た効果を作成できます。
スプリングの目標位置は、スプリング ジョイントの作成時か、再生モードに入った時の、Anchor から Connected Body (またはワールド) までの相対位置によって決定されます。 これにより、ジョイントしたキャラクターやオブジェクトをエディタで設定する際にはスプリング ジョイントを効率的に使えますが、スプリングのプッシュ / プル動作をスクリプティングを通じてランタイムで作成するのはより難しくなります。 スプリング ジョイントを主に GameObject の位置を変更するために使用したい場合、リジッドボディ付きの空の GameObject を作成し、それをジョイントされたオブジェクトの Connected Rigidbody に設定します。 次にスクリプティングで Connected Rigidbody の位置を変更すれば、期待通りにスプリングが移動します。
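上記の手法のスケッチです。target は、ジョイントの Connected Rigidbody に設定した空の GameObject のキネマティック リジッドボディを想定した、説明用の仮のフィールドです。

```csharp
using UnityEngine;

// スクリプトから Connected Rigidbody の位置を動かして、
// スプリングの目標位置を変更する仮の例。
public class SpringTargetMover : MonoBehaviour
{
    // ジョイントの Connected Rigidbody に設定した
    // キネマティック リジッドボディを割り当てる想定です。
    public Rigidbody target;

    void FixedUpdate()
    {
        // ターゲットを右方向へゆっくり動かすと、
        // ジョイントされたオブジェクトがスプリングで追従します。
        target.MovePosition(target.position + Vector3.right * Time.deltaTime);
    }
}
```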
Connected Rigidbody
- ジョイントを起動させるのに、Connected Body を使用する必要はありません。 一般に、オブジェクトの位置および / または回転が他のオブジェクトに依存する場合にのみ使用します。 Connected Rigidbody がない場合、スプリングはワールドに接続されます。
スプリングとダンパー
スプリングは、オブジェクトをその目標位置に引っ張り戻す力の強さです。 0 の場合、オブジェクトに引っ張る力はかからず、スプリング ジョイントがないかのように動作します。
Damperは、Springの力に対する抵抗力です。 この値が低いほど、オブジェクトのスプリング力は強くなります。 Damperが増えると、ジョイントによる跳ね返りの量は減ります。
Min & Max Distance
オブジェクトの位置が Min Distance と Max Distance の間にある場合、ジョイントはオブジェクトに力を適用しません。 ジョイントを有効にするには、オブジェクトの位置がこれらの値の範囲外に移動する必要があります。
ヒント
- ジョイントが機能するよう、Connected Bodyをジョイントに割り当てる必要はありません。
- 再生モードに入る前に、エディタでジョイント オブジェクトの理想的な位置を設定します。
- スプリング ジョイントは、リジッドボディを追加する必要があります。
class-InteractiveCloth
インタラクティブ クロスは、メッシュでの「布のような」動作をシミュレートするコンポーネントです。 シーンでクロスを使用したい場合にこのコンポーネントを使用します。

「シーン ビューでのインタラクティブ クロスとインスペクタでのそのプロパティ」
プロパティ
インタラクティブ クロス
| Bending Stiffness | クロスの曲げ剛性。 |
| Stretching Stiffness | クロスの伸張剛性。 |
| Damping | クロスの動きを制動します。 |
| Use Gravity | クロスのシミュレーションに重力を影響させるかどうか。 |
| External Acceleration | クロスに適用される一定の、外部加速度。 |
| Random Acceleration | クロスに適用されるランダムの、外部加速度。 |
| Mesh | シミュレーションに対してインタラクティブ クロスが使用するメッシュ。 |
| Thickness | クロス表面の厚さ。 |
| Friction | クロスの摩擦。 |
| Density | クロスの密度。 |
| Pressure | クロス内部の圧力。 |
| Collision Response | 衝突するリジッドボディに適用される力の量。 |
| Attachment Tear Factor | 追加されたリジッドボディを裂くのに必要な伸ばす距離。 |
| Attachment Response | 追加されたリジッドボディに適用される力の量。 |
| Tear Factor | クロスを裂くのに必要なクロスの頂点を伸ばす距離。 |
| Attached Colliders | このクロスに対して追加されたコライダを含む配列。 |
| Self Collision | クロスがそれ自身と衝突するかどうか。 |
インタラクティブ クロス コンポーネントは、クロス レンダラ コンポーネントに依存します。つまり、このコンポーネントは、クロス レンダラがゲーム オブジェクトに表示されている場合は、削除できないということです。
クロス レンダラ
| Cast Shadows | 有効にすると、クロスが影を投影します。 |
| Receive Shadows | 有効にすると、クロスは影を受けることができます。 |
| Lightmap Index | このレンダラに適用されるライトマップの索引。 |
| Lightmap Tiling Offset |ライトマップに使用されるタイリングとオフセット。 | |
| Materials | クロスが使用するマテリアル。 |
| Pause When Not Visible | 選択すると、クロスがカメラでレンダリングされない場合に、シミュレーションが計算されません。 |
インタラクティブ クロス ゲーム オブジェクトの追加
シーンにインタラクティブ クロスを追加するには、「GameObject->Create Other->Cloth」を選択します。
ヒント
- ゲームで多くのクロスを使用すると、指数関数的にゲームのパフォーマンスが下がります。
- キャラクターで衣服をシミュレートしたい場合は、代わりに Skinned Cloth コンポーネントを確認してください。 SkinnedMeshRenderer と連携して動作し、Interactive Cloth よりもはるかに高速です。
- その他のオブジェクトにクロスを追加するには、「Attached Colliders」プロパティを使用して、追加するその他のオブジェクトを割り当てます。 これを行うには、コライダがクロス メッシュの一部の頂点を重ねる必要があります。
- Attached Colliders のオブジェクトは、追加しているクロスと交差する必要があります。
備考
- クロス シミュレーションでは法線は生成されますが、接線は生成されません。 メッシュに接線がある場合、シェーダに変更なしで渡されます。 つまり、接線に依存したシェーダ (バンプマップ シェーダなど) を使用する場合、初期位置から移動したクロスに対するライティングは誤っているように見えます。
class-SkinnedCloth

「シーン ビューとインスペクタでのスキン クロス」
SkinnedCloth コンポーネントは、SkinnedMeshRenderer と連携して、キャラクター上で衣服をシミュレートします。 SkinnedMeshRenderer を使用するアニメート化されたキャラクターがある場合、その SkinnedMeshRenderer のあるゲーム オブジェクトに SkinnedCloth コンポーネントを追加して、キャラクターをより現実的に見せることができます。 SkinnedMeshRenderer のある GameObject を選択し、メニューから SkinnedCloth コンポーネントを追加します。
SkinnedCloth コンポーネントは、SkinnedMeshRenderer からの頂点の出力を受け取り、そこにクローシング シミュレーションを適用します。 SkinnedCloth コンポーネントには、頂点ごとの係数のセットがあり、シミュレートされたクロスが、スキン メッシュに関連してどれだけ自由に移動できるかを定義します。
これらの係数は、SkinnedCloth コンポーネントのあるゲーム オブジェクトを選択すると、シーン エディタとインスペクタを使用して、視覚的に編集できます。 編集モードには、選択モードと頂点描画モードの 2 種類があります。 選択モードでは、シーン ビューで頂点をクリックして、選択し、インスペクタでその係数を編集します。 頂点描画モードでは、インスペクタで必要な係数値を設定し、変更したい係数の隣にある「描画」ボタンを有効にして、頂点をクリックして、値を頂点に適用します。
スキン クロスのシミュレーションは、SkinnedMeshRenderer でスキンされた頂点によってのみ駆動され、コライダとは相互作用しません。 そのため、物理特性シミュレーションの残りと同じフレーム レート、同じスレッドでシミュレートする必要がなく、スキン クロスのシミュレーションは、完全に物理的な Interactive Cloth コンポーネントより高速になります。
スキン クロス コンポーネントは、いつでも無効または有効に切り替えられます。 オフにするとレンダリングは通常の SkinnedMeshRenderer に切り替わるので、パフォーマンスを動的に調整する必要がある場合はいつでもこれらを切り替えることができます。 SkinnedCloth.SetEnabledFading() メソッドを使用して、スクリプトから 2 つのモード間で円滑にクロス フェードすることもできます。
クロス シミュレーションでは法線は生成されますが、接線は生成されません。 メッシュに接線がある場合、シェーダに変更なしで渡されます。 つまり、接線に依存したシェーダ (バンプマップ シェーダなど) を使用する場合、初期位置から移動したクロスに対するライティングは誤っているように見えます。
クロス係数
頂点ごとに 4 つの係数があり、スキンされた頂点および法線に対して、クロスの頂点がどのように動くかを定義します。 以下の 4 つの係数があります。
| Max Distance | 頂点がスキン メッシュ頂点位置から移動できる距離。 SkinnedCloth コンポーネントにより、クロス頂点は、スキン メッシュ頂点位置から maxDistance 内に留まります。 maxDistance が 0 の場合、頂点はシミュレートされませんが、スキン メッシュに設定されます。 この動作は、クロス頂点をアニメート化キャラクターに固定、つまり、スキンさせない頂点や、キャラクターの体 (ベルトで固定されたズボンの腰) に固定されるパーツに対してこれを行いたい場合に便利です。 しかし、クロス シミュレーション (顔や手など) を使用しないキャラクターの大きなパーツがある場合、最高のパフォーマンスを得るには、これらを SkinnedCloth コンポーネントを持たない別個のスキン メッシュとして設定します。 |
| Distance Bias | スキン メッシュの法線に基づき、maxDistance で定義される球体を歪めます。 この機能は、値を 0.0 (デフォルト) にすると無効になります。 この場合、maxDistance 球体は歪められません。 maxDistanceBias を -1.0 まで下げると、頂点が接線方向に移動できる距離が減ります。 -1.0 の場合、頂点は、スキン メッシュ頂点位置を通る法線上で、スキン メッシュ頂点位置から maxDistance 内に留まる必要があります。 maxDistanceBias を 1.0 まで上げると、頂点が法線方向に移動できる距離が減ります。 1.0 の場合、頂点は、スキン メッシュ頂点位置から maxDistance 内の接平面内でのみ移動できます。 |
| Collision Sphere Radius and Collision Sphere Distance | 頂点が入ることのできない球体の定義。 これにより、アニメート化されたクロスへの衝突が可能になります。 このペア (collisionSphereRadius、collisionSphereDistance) により、各クロス頂点に球体が定義されます。 球体の中心は、位置 constrainPosition - constrainNormal * (collisionSphereRadius + collisionSphereDistance) にあり、その半径は collisionSphereRadius です。ここで constrainPosition と constrainNormal は、SkinnedMeshRenderer で生成された頂点位置と法線です。 SkinnedCloth は、クロスの頂点がこの球体に入らないようにします。 つまり、collisionSphereDistance は、クロスがスキン メッシュにどれだけ深く入り込めるかを定義します。 これは通常 0 に設定されます。 collisionSphereRadius は、クロスの頂点が衝突球体周辺でスリップできるよう、隣接する頂点間の距離よりも大きい値に設定する必要があります。 このような設定では、クロスはスキン メッシュと衝突しているように見えます。 |
maxDistanceBias の異なる値に対して、スキンされた頂点と法線に関連してこれらの係数がどのように機能するかについては、この画像を参照してください。 赤いエリアは、collisionSphereRadius と collisionSphereDistance によって定義される衝突球体で、ここにはクロスの頂点は入れません。 従って、maxDistance と maxDistanceBias で定義される緑のエリアから赤いエリアを差し引いた部分が、クロス頂点が移動できる空間を定義します。

SkinnedCloth インスペクタ
SkinnedCloth のある GameObject を選択すると、SkinnedCloth インスペクタを使用して、クロス頂点係数およびその他のプロパティを編集できます。 インスペクタには、次の 3 つのタブがあります。
頂点選択ツール

このモードでは、シーン ビューで頂点を選択し、インスペクタで係数を設定します (クロス係数の機能の説明については、前項を参照)。 [Shift] キーを押すか、マウスで長方形をドラッグすることで、複数の頂点を選択できます。 複数の頂点を選択すると、インスペクタは頂点係数の平均値を表示します。 しかし、値を変更すると、その係数はすべての頂点に対して同じ値に設定されます。 シーン ビューをワイヤーフレーム モードに切り替えると、後ろ向きの頂点を確認および選択できます。 これは、キャラクターの完全な部分を選択したい場合に便利です。
この係数がすべての頂点に対してどの値を持っているかを理解するには、係数フィールドの目アイコンをクリックして、エディタにシーン ビューでその係数を表示させることができます。 これによって、その係数の最も低い値のある頂点は緑色で、中距離の値は黄色、最高値は青で表示されます。 カラー スケールは常にその係数の使用される値の範囲に関連して選択され、絶対値からは独立しています。
頂点ペイント ツール

頂点選択に似ていますが、これは、頂点係数値を設定するためのツールです。 頂点選択とは違い、色を変更する前に頂点をクリックする必要はありません。このモードでは、設定したい値を入力し、変更したい係数の隣にあるペンキ ブラシ トグルを有効にして、その値を設定したいすべての頂点をクリックします。
設定

3 つ目のタブにより、スキン クロスの各種プロパティを設定できます。
| Bending Stiffness | クロスの曲げ剛性。 |
| Stretching Stiffness | クロスの伸張剛性。 |
| Damping | クロスの動きを制動します。 |
| Use Gravity | クロスのシミュレーションに重力を影響させるかどうか。 |
| External Acceleration | クロスに適用される一定の外部加速度。 |
| Random Acceleration | クロスに適用されるランダムの外部加速度。 |
| World Velocity Scale | クロスの頂点に影響させるキャラクターのワールド座標系での移動量。 この値が高いほど、GameObject のワールド座標系の移動への対応として、クロスの移動量が増えます。 基本的に、これは、SkinnedCloth の空気摩擦を定義します。 |
| World Acceleration Scale | クロスの頂点に影響させるキャラクターのワールド座標系での加速度。 この値が高いほど、GameObject のワールド座標系の加速への対応として、クロスの移動量が増えます。 クロスの動きが活発でないように見える場合、この値を上げてみてください。 キャラクターが加速した際に、安定していないように見える場合、この値を減らしてみてください。 |
class-WheelCollider
Wheel Collider は、陸上車両用の特殊なコライダです。 組み込み衝突検出、ホイール物理特性、スリップ ベースのタイヤ摩擦モデルを含みます。 ホイール以外のオブジェクトにも使用できますが、特にホイールのある車両向けに設計されています。

ホイール コライダ コンポーネント。 ATI Technologies 社により無料提供された車両モデル。
プロパティ
| Center | オブジェクトのローカル空間でのホイールの中心。 |
| Radius | ホイールの半径。 |
| Suspension Distance | ホイール サスペンションの最大延長距離で、ローカル空間で測定されます。 サスペンションは常にローカルな Y 軸を通じて下に伸びます。 |
| Suspension Spring | サスペンションが、スプリングと制動力を追加して、Target Positionに達しようとします。 |
| Spring | スプリング力がTarget Positionに到達しようとします。 値が大きいほど、サスペンションがTarget Positionに到達する速度が上がります。 |
| Damper | サスペンションの速度を制動します。 値が小さいほどSuspension Springの速度が下がります。 |
| Target Position | Suspension Distance に沿ったサスペンションの静止位置。 0 は完全に伸びきったサスペンション、1 は完全に縮まったサスペンションに対応します。 デフォルトは 0 で、通常の車両のサスペンションの動作に一致します。 | |
| Mass | ホイールの質量。 |
| Forward/Sideways Friction | ホイールが前転または横転する際のタイヤの摩擦のプロパティ。 下記の Wheel Friction Curves を参照してください。 |
詳細
ホイールの衝突検出は、ローカルな Y 軸を通じて、Centerから下に光線を放つことで実行されます。 ホイールには、Radiusがあり、Suspension Distanceに応じて、下に延長できます。 車両は、次の各種プロパティを使用して、スクリプティングから制御されます。 motorTorque、brakeTorqueとsteerAngle。 詳細については、Wheel Collider scripting reference を参照してください。
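motorTorque、brakeTorque、steerAngle を使った制御の最小のスケッチです。クラス名、フィールド名、トルクや角度の値は説明用の仮のもので、実際の車両ではホイールごとの設定や安定化処理が追加で必要になります。

```csharp
using UnityEngine;

// WheelCollider のプロパティで車両を制御する仮の例。
public class SimpleCarControl : MonoBehaviour
{
    // インスペクタで前輪の WheelCollider を割り当てる想定です。
    public WheelCollider frontLeft;
    public WheelCollider frontRight;
    public float maxTorque = 50.0f;
    public float maxSteerAngle = 30.0f;

    void FixedUpdate()
    {
        // 縦方向の入力でモーター トルク、横方向の入力で操舵角を設定します。
        float accel = Input.GetAxis("Vertical");
        float steer = Input.GetAxis("Horizontal");
        frontLeft.motorTorque = maxTorque * accel;
        frontRight.motorTorque = maxTorque * accel;
        frontLeft.steerAngle = maxSteerAngle * steer;
        frontRight.steerAngle = maxSteerAngle * steer;
    }
}
```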
ホイール コライダは、スリップ ベースの摩擦モデルを用いて、物理特性エンジンの残りとは別に摩擦を計算します。 これにより、より現実的な動作が可能になりますが、標準の Physic Material 設定を無視するようになります。
ホイール コライダの設定
ホイール コライダのオブジェクト自体を転動または回転させて車両を制御することはしません。 ホイール コライダが追加されたオブジェクトは、常に車両自体に対して固定されている必要があります。 しかし、ホイールのグラフィック表示は回転および転動させたい場合があります。 これを行う最良の方法は、ホイール コライダと表示されるホイールに対して個別のオブジェクトを設定することです。

ホイール コライダは、表示されているホイール モデルから切り離されます
衝突ジオメトリ
車両は高い速度を達成できるため、レース トラックでの衝突ジオメトリを正しく設定することは非常に重要です。 特に、衝突メッシュには、表示されるモデルにあるような小さい凹凸 (フェンスのポールなど) があってはいけません。 レース トラック用の衝突メッシュは通常、表示メッシュとは別に作成し、衝突メッシュをできるだけ滑らかにします。 また、薄いオブジェクトがあってもいけません。トラックの境界が薄い場合、衝突メッシュではそれを広くします (あるいは、車両がそこに行けない場合は、反対側を完全に除去します)。

表示ジオメトリ (左) は衝突ジオメトリ (右) よりもはるかに複雑になります。
Wheel Friction Curves
摩擦は、下記のWheel Friction Curveによって説明できます。 ホイールの前進 (転動) 方向および横方向に対し、個々の曲線があります。 両方向で、タイヤがスリップする量を最初に決定します (タイヤのゴムと道路間の速度差に基づきます)。 次に、接点にかけられるタイヤの力を突き止めるのにこのスリップ値が使用されます。
曲線は、タイヤのスリップの測定値を入力とし、力を出力として供給します。 この曲線は、2 片のスプラインによって近似化されます。 1 つ目の片は (0 , 0) から (ExtremumSlip , ExtremumValue) に至り、この点で曲線の接線が 0 になります。 2 つ目の片は (ExtremumSlip , ExtremumValue) から (AsymptoteSlip , AsymptoteValue) に至り、この点でも曲線の接線が 0 になります。

ホイールの摩擦曲線の一般的な形状
ゴムは伸びることでスリップを補正するので、現実のタイヤは低スリップの場合に強い力を発揮します。 スリップが非常に高くなると、タイヤが滑ったり空転し始めた時に力が減ります。 そのため、タイヤの摩擦曲線は、上記画像のような形状になります。
| Extremum Slip/Value | 曲線の極値点。 |
| Asymptote Slip/Value | 曲線の漸近線点。 |
| Stiffness | Extremum ValueおよびAsymptote Valueに対する乗数 (デフォルトは 1)。 摩擦の剛性を変化させます。 これを 0 に設定すると、ホイールからのすべての摩擦が完全に無効になります。 通常、ランタイム時に剛性を修正して、スクリプティングから各種地盤材料をシミュレートします。 |
ヒント
- より安定した車両の物理特性を得るため、特に高い速度を達成できるレーシング カーの場合に、Time Manager で、物理特性タイムスタンプの長さを減らしたい場合があります。
- 車両があまりにも簡単に転倒するのを防ぐには、スクリプトから、その Rigidbody の質量中心を若干下げ、車両の速度に応じた下方への力を加えます。
comp-GameObjectGroup
GameObject は、その他すべての Components を格納する容器です。 ゲーム内のすべてのオブジェクトは、異なるコンポーネントを含む GameObject です。 技術的には、GameObject なしでコンポーネントを作成できますが、GameObject に追加するまでは使用できません。
Page last updated: 2012-11-13
class-GameObject
GameObject は、その他すべての Components を格納する容器です。 ゲーム内のすべてのオブジェクトは、本質的に GameObject です。

空の GameObject
GameObject の作成
GameObject は、自身でゲームに特性を追加しません。 むしろ、実際の機能を実行するコンポーネントを格納する容器として機能します。 例えば、Light は、GameObject に追加されるコンポーネントです。
スクリプトからコンポーネントを作成したい場合、 空の GameObject を作成し、gameObject.AddComponent(ClassName)関数を使って、必要なコンポーネントを追加します。 コンポーネントを作成し、オブジェクトからそれを参照することはできません。
スクリプトから、メッセージの送信またはGetComponent(TypeName)関数により、コンポーネントは互いに簡単に通信することができます。 これにより、複数の GameObject に追加し、別の目的に再利用できる小さい、再利用可能なスクリプトを記述できます。
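AddComponent と GetComponent の使い方の最小のスケッチです。クラス名と作成する GameObject の名前は説明用の仮のものです。

```csharp
using UnityEngine;

// 空の GameObject を作成してコンポーネントを追加し、
// そのコンポーネントへの参照を取得する仮の例。
public class ComponentExample : MonoBehaviour
{
    void Start()
    {
        // 空の GameObject を作成し、Rigidbody コンポーネントを追加します。
        GameObject go = new GameObject("PhysicsObject");
        go.AddComponent<Rigidbody>();

        // GetComponent で追加したコンポーネントへの参照を取得します。
        Rigidbody rb = go.GetComponent<Rigidbody>();
        rb.useGravity = false;
    }
}
```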
詳細
コンポーネントを格納する容器としての機能とは別に、GameObject は、Tag、Layer および Name を含みます。
タグ名を使用してオブジェクトを素早く検索するために、タグが使用されます。 レイヤーを使用すると、オブジェクトの特定のグループにのみ光線を投射したり、光をレンダリングまたは適用したりできます。 タグとレイヤーは、Tag Manager で設定できます。
Static チェックボックス
GameObject には、Static と呼ばれる新しいチェックボックスがあります。 このチェックボックスは次の目的に使用されます。
- 自動バッチ処理のためのスタティック ジオメトリの作成
- Occlusion Culling の計算

Static チェックボックスは、オクルージョン データの生成時に使用されます
オクルージョン データを生成するには、GameObject を Static とマークします。これにより、Static オブジェクトの裏で見えないメッシュ オブジェクトを間引く (または無効にする) ことができるようになります。 そのため、シーンで動かない環境オブジェクトには、Static のマークを付けるべきです。
Unity でのオクルージョン カリングの機能の詳細については、Occlusion Culling ページを参照してください。
ヒント
- 詳細については、GameObject scripting reference page を参照してください。
- レイヤーの使用法については、here を参照してください。
- タグの使用法については、here を参照してください。
comp-ImageEffects
このグループは、Render Texture ベースの全画面画像後処理効果をすべて扱います。 Unity Pro 内でのみ使用できます。 創作に多くの時間をつぎ込むことなく、ゲームのルック アンド フィールに多くのものを追加できます。
全ての画像効果は、Unity の OnRenderImage 関数を利用して作られています。 カメラにアタッチされた任意の MonoBehaviour でこの関数をオーバーライドすることで、様々なカスタム効果を作成できます。
画像効果はopaque描画パスの後、またはopaque & transparent描画パス(デフォルトではこちら)の後に実行されます。 前者の振る舞いはとても簡単で OnRenderImage 関数に ImageEffectOpaque の属性を付けるだけでできます。 この効果の例としては、 Edge Detection effect をご覧ください。
- Antialiasing
- Bloom
- Camera Motion Blur
- Depth of Field
- Noise And Grain
- Screen Overlay
- Color Correction Lookupテクスチャ
- Bloom and Lens Flares
- Color Correction Curves
- Contrast Enhance
- Crease
- Depth of Field 3.4
- Tonemapping
- Edge Detectエフェクト 法線マップ
- Fisheye画像効果
- Global Fog
- Sun Shaft
- Tilt Shift
- Vignetting (および Chromatic Aberration)
- Blur
- Color Correction画像効果
- Contrast Stretch画像効果
- Edge Detect効果
- Glow画像効果
- Grayscale 画像効果
- Motion Blur 画像効果
- Noiseイメージエフェクト
- Sepia Toneイメージエフェクト
- スクリーンスペース アンビエントオクルージョン(SSAO)イメージエフェクト
- Twirl 画像効果
- Vortex 画像効果
上記ページで使用されるシーンは、画像効果を適用することなく、このような見た目になります。

画像後処理効果なしのシーン
複数の画像効果を、同じカメラでスタックできます。 ただ追加するだけで機能します。

同じカメラに追加されたぼやけとノイズ
script-AntialiasingAsPostEffect
Antialiasingは、グラフィックにスムーズな外観を与えるために設計されたアルゴリズムのセットを提供しています。異なる色の2つの画像が隣接するとき、ピクセルの形状は境界に沿って、特徴的な階段状の模様を形成してしまいます。この効果はエイリアスとして知られ、Antialiasingはこの効果を減少させるための手法を意味します。

左の立方体はAntialiasingなしでレンダリングされ、右の立方体はFXAA1PresetBアルゴリズムを使用しています
Antialiasing アルゴリズムは画像ベースのアルゴリズムで、従来のマルチサンプリングが適切にサポートされていない遅延レンダリングに非常に便利です。 現在サポートされているアルゴリズムは、NVIDIA の FXAA、FXAA II、FXAA III (調整可能かつ据え置きゲーム機向け)、ローカルの縁のみをぼやけさせる NFAA と SSAA、および長い縁にも対処する DLAA です。 SSAA は最速の手法で、その次に NFAA、FXAA II、FXAA III、DLAA、その他の FXAA が続きます。 通常、Antialiasing の品質はアルゴリズムの処理速度とのトレードオフとなりますが、アルゴリズムの選択がほぼ影響しない状況もあるかもしれません。
特に据え置きゲーム機およびNaClに興味がある方は FXAA IIIの実装は、品質とパフォーマンスのトレードオフについて上手にバランスをとり、シャープにするかぼやかすのか、調整することが出来ます。
他のイメージエフェクト 同様、エフェクト機能はUnity Proのみであり利用可能とするためにはプロスタンダードアセット(Pro Standard Assets )をあらかじめインポートする必要があります。
プロパティ
| AA Technique | 使用アルゴリズム |
ハードウェアの対応サポート
エフェクトが有効となるためにはピクセルシェーダ(3.0)あるいはOpenGL ES2.0対応したグラフィックスカードが必要です。対応デバイスについてPCでは2004年以降のNVIDIAグラフィックスカード(GeForce 6)、2005年以降のAMDグラフィックスカード(Radeon X1300)、2006年以降のIntelグラフィックスカード(GMA X3000)、モバイルではOpenGL ES2.0が必要、据え置き機ではXbox 360、PS3です。
全てのイメージ効果はエンドユーザのグラフィックスカードで動作しないと分かった場合、自動的に無効化されます。
Page last updated: 2012-11-26
script-Bloom
Blooming とは、明るい光源 (例えば閃光のような) からの光が周囲の物体に漏れるように見える光学効果です。 Bloom エフェクトはブルームを追加したうえで、さらに非常に効率的な方法でレンズ フレアを自動生成します。 ブルームはシーンに大きな違いを生む効果であり、HDR レンダリングと一緒に使用すると、魔法や夢の中の世界のような印象を与える非常に特徴的な効果です。 一方、適切な設定を与えることで、この効果を利用して写真のようなリアリティを向上させることも可能です。 非常に明るいオブジェクトの周りの輝きは、輝度が大幅に異なる場合のよくある現象として、映画や写真のなかで観察されます。 Bloom は、Glow エフェクトおよび Bloom And Flares エフェクトを拡張した機能です。

HDR で適切な輝度を Bloom エフェクトで再現した例。 このシーンでは、Bloom の HDR しきい値を 1.0 に設定していて、反射、ハイライトや発光面は輝度を増しますが、一般的なライティングは全般に影響を受けません。 この特定の例では、車窓だけが輝いています (HDR の日光反射による)。

Bloomエフェクトによって作成されたAnamorphic Lens Flaresの例
その他の image effects 同様、この効果は Unity Pro 専用です。 必ず、Pro Standard Assets をインストールしてください。
プロパティ
| Quality | Highで高品質、高周波数を維持し、エイリアシング低減します。 |
| Mode | 高度なオプションを表示するにはComplexモードを選択します。 |
| Blend mode | カラーバッファにBloomを追加するために使用するメソッド。柔らかいScreenモードは、明るい画像のディテールを保持するのに良いですが、HDRでは動作しません。 |
| HDR | Bloom が HDR バッファを使用するかどうか。 ピクセル強度が [0,1] の範囲を超える場合があるため、見た目が異なります。 詳細はトーンマッピング とHDR を参照して下さい。 |
| Cast lens flares | スクリーンベースの自動レンズフレア生成を有効または無効にします。 |
| Intensity | 追加されたライトのグローバルライト強度(Bloom、Lens Flaresに影響します)。 |
| Threshhold | 画像領域のなかでこのしきい値より明るい領域はブルーム(また潜在的にLens Flare)の影響を受けます。 |
| RGB Threshhold | R、G、Bのために選んだ異なるthreshholds |
| Blur iterations | ガウス ブラー(ぼかし)が適用される回数。反復回数が多いほど平滑性を向上させますが、周波数ノイズを隠すなどさらに処理時間がかかります。 |
| Sample distance | ブラー(ぼかし)の最大半径。パフォーマンスには影響しません。 |
| Use alpha mask | アルファチャネルがBloomエフェクトのマスクとして機能する程度。 |
| Lens flare mode | Lens Flareのタイプ。オプションはGhosting、Anamorphic、あるいはふたつの組み合わせです。 |
| Local intensity | Lens Flareのためだけに使用するローカル強度。0にするとLens Flareは完全に無効。 |
| Local threshold | 画像の部分がLens Flareを受けるかを定義する累積ライト強度のしきい値。 |
| Stretch width | Anamorphic Lens Flareの幅。 |
| Rotation | Anamorphic Lens Flareの回転 |
| Blur iterations | Anamorphic Lens Flareにブラー(ぼかし)を適用する回数。反復回数が多いほど平滑性を向上させますが、さらに処理時間がかかります。 |
| Saturation | Lens Flareを飽和/(非飽和)させる。0を指定した場合、Lens Flareは完全にTint Colorになります。 |
| Tint Color | Anamorphic Lens Flareのカラー調整 |
| 1st-4th Color | Ghosting or Combinedが選択されたときのすべてのLens Flareのカラー調整 |
| Lens flare mask | 画面の端でのLens Flareにより画像の乱れを防ぐために使用されるマスク。 |
ブレンドモード:AddおよびScreen
ブレンドモードはオーバーレイ時に2つの画像を合成する方法を決定します。ベースの画像から各ピクセルは、オーバーレイ画像での対応する位置のピクセルと数学的に組み合わせます。UnityのイメージエフェクトはAdd、Screen二つのブレンドモードが用意されています。
Addモード
画像がAddモードでブレンドされている場合、カラーチャネル(赤、緑、青)の値は、単純に足し合わされ、最大でも値は1となります。全体的な効果として最終的には、明るくない画像の領域を最も明るい部分に溶け込むことができます。最終的なイメージは色やディテールを失うため、Addモードは、まぶしいホワイトアウト効果が必要な場合に便利です。
Screenモード
白いスクリーン上に同時2つのソース画像を投影する効果をシミュレートしているので、Screenモードと命名されています。各カラーチャネルは、別々に同じ値を合成します。まず、2つのソースピクセルのカラーチャネルの値が反転されます。(すなわち1から値を引き算します)次に、反転した2つ値が乗算され、その結果が反転されます。結果は、2つのソースピクセルのどちらよりも明るくなる一方で、最大の明るさになるために元の色のどちらか一つも最大の明るさであった場合のみになります。全体的な効果としては、ソース画像より多くのカラーバリエーションとディテールが保持されるため、Addモードよりも緩やかな効果につながります。
ハードウェアの対応サポート
エフェクトが有効となるためにはピクセルシェーダ(2.0)あるいはOpenGL ES2.0対応したグラフィックスカードが必要です。対応デバイスについてPCでは2003年以降のNVIDIAグラフィックスカード(GeForce FX)、2005年以降のAMDグラフィックスカード(Radeon 9500)、2005年以降のIntelグラフィックスカード(GMA 900)、モバイルではOpenGL ES2.0が必要、据え置き機ではXbox 360、PS3です。
全てのイメージ効果はエンドユーザのグラフィックスカードで動作しないと分かった場合、自動的に無効化されます。
Page last updated: 2012-11-17
script-CameraMotionBlur
Motion Blurはカメラシステムの"光"が時間をかけて集積される(離散的なスナップショットをとるだけではない)事実をシミュレートした一般的なエフェクトです。速いカメラや物体の動きは、ぼやけたイメージを生成します。

カメラが横に移動している標準的なカメラの Motion Blur の例。 また、背景領域が前景領域ほどぼやけないという、Motion Blur の典型的な副作用に注目してください。
現在の Motion Blur の実装では、特定のレイヤーを除外するオプション (特に、カメラの動きをフォローしているキャラクターや動的オブジェクトを除外する場合に便利) を使用して、カメラの動きによるブレをサポートしています。 しかし、追加のスクリプトを用意して、各オブジェクト モデルのマトリクスを保持し、速度バッファを更新すれば、動的なオブジェクトをサポートするように拡張することができます。

動的オブジェクト (ドラム缶、バス) を除外した Camera Motion Blur の例
その他の image effects 同様、この効果は Unity Pro 専用です。 必ず、Pro Standard Assets をインストールしてください。
プロパティ
| Technique | Motion Blurのアルゴリズム。通常は Reconstruction フィルタにより最高の品質が得られますが、パフォーマンスが犠牲となります。また、DirectX11 対応のグラフィックスデバイスが使用されない限り、Blur(ぼかし)の半径は10ピクセルに限られます。 |
| Velocity Scale | より高いScaleにすることで、画像をぼかしやすくなります。 |
| Velocity Max | Blur(ぼかし)が行われる最大ピクセル距離およびReconstructionフィルタのタイルサイズ(下記参照)。 |
| Velocity Min | Blur(ぼかし)が完全におこなわれない最小のピクセル距離。 |
| Camera Motion のプロパティ: |
| Camera Rotation | カメラの回転によるBlur(ぼけ)のScale強度。 |
| Camera Movement | カメラの移動によるBlur(ぼけ)のScale強度。 |
| Local Blur、 Reconstruction および ReconstructionDX11 のプロパティ: |
| Exclude layers | このレイヤー内のオブジェクトは影響を受けません。 |
| Velocity downsample | 速度バッファを低解像度にすることで、パフォーマンスが向上するものの、著しくBlur(ぼかし)ーの品質が低下します。単純なシーンでは有効なオプションかもしれません。 |
| Sampler Jitter | ノイズを追加するとReconstructionフィルタによるゴーストの発生を防止するのに役立ちます。 |
| Max Sample Count | ぼかしを決定するために使用されるサンプルの数。パフォーマンスに大きく影響を与えます。 |
| Preview (Scale) | 人為的にカメラモーションの値を設定したことでみえるBlur(ぼかし)をプレビューします。 |
Motion Blurフィルタ(テクニック)
Local Blur は、現在のピクセル速度に沿って指向性の Blur (ぼかし) を実行します。 本質的に gather (収集) 操作であるため、幾何学的に単純なシーン (例えば広大な Terrain 地形) や大きな Blur (ぼかし) 半径のときなど、リアリティがそれほど重要でないシーンに適しています。 一つの欠点は、フォーカスの合った背景領域に、Blur (ぼかし) されたオブジェクトを適切に重ねることができないことです。 もう一つの欠点は、除外オブジェクトが Blur (ぼかし) された領域を汚すことです。

カメラが横向きに移動していて、前景 (上) または背景 (下) が除外対象である場合の Local Blur テクニックの例。 先に述べた欠点が両方とも現れ、一般に画質劣化が起きることに注意してください。 実際に使用するケースでこのことが大きな問題でない場合は、この Motion Blur テクニックが迅速かつ効果的なオプションです。
Reconstructionフィルタは、より現実的なBlur(ぼかし)を生むことができます。Reconstrution(再構築)という名前の由来は、指定された色と深度バッファで利用可能な情報がない場合でも、フィルタが、背景を推定しようとすることに由来しています。結果としては、収集フィルターがより高い品質を生みLocal Blurの欠点を回避することがあります(例えば適正なオーバーラップを生成する、など)。
機能は、A Reconstruction Filter for Plausible Motion Blur (妥当なモーションブラーの再構築フィルタ) (http://graphics.cs.williams.edu/papers/MotionBlurI3D12/ ) の論文にもとづいています。 アルゴリズムは、Velocity Max の大きさのタイルに画像を分割し、その領域の最大速度を用いて、ぼやけたピクセルを近くの領域に散乱するシミュレーションを行います。 タイルのサイズが大きく、速度変化が激しい場合、画像の乱れが発生する可能性があります。
DirectX11 の排他フィルタ ReconstructionDX11 は任意のBlur(ぼかし)の距離(別名タイルサイズやVelocity Max)とサンプルの数を柔軟に設定できます。

カメラが横に移動している間に Reconstruction テクニックを使用している例。 今度は、Reconstruction フィルタが問題を解消してくれるため、前述の画像の乱れが少なくなります (背景を除外して立方体が重なったとき (下) も適切に重なり、除外されたキューブが Blur (ぼかし) された背景の領域を汚しません)。
上記のすべてのフィルタが速度バッファを生成する際に Pre-pass を必要とする一方で、Camera Motion フィルタは、カメラの動きのみで動作します。 カメラの変化に基づいてグローバルなフィルタ方向を生成したうえで、その方向に沿って画面を Blur (ぼかし) します (詳細については http://www.valvesoftware.com/publications/2008/GDC2008_PostProcessingInTheOrangeBox.pdf を参照してください)。 これは、特に FPS ゲームで、例えば高速なカメラの回転を滑らかにするのに適しています。

Camera Motionテクニックを使用した例。Blur(ぼかし)が、画面全体に均一であることに注意してください。
ハードウェアの対応サポート
エフェクトが有効となるためにはピクセルシェーダ(3.0)あるいはOpenGL ES2.0対応したグラフィックスカードおよびデプステクスチャ対応が必要です。対応デバイスについてPCでは2004年以降のNVIDIAグラフィックスカード(GeForce 6)、2005年以降のAMDグラフィックスカード(Radeon X1300)、2006年以降のIntelグラフィックスカード(GMA X3000)、モバイルではOpenGL ES2.0およびデプステクスチャ対応が必要、据え置き機ではXbox 360、PS3です。
全てのイメージ効果はエンドユーザのグラフィックスカードで動作しないと分かった場合、自動的に無効化されます。
Page last updated: 2012-11-26
script-DepthOfFieldScatter
Depth of Fieldは、カメラレンズの特性をシミュレートした、一般的なエフェクトです。 この機能は過去バージョンのDepth of Field 3.4エフェクト より近代的かつ洗練されていて、HDR レンダリングやDirectX 11 互換のグラフィックスデバイスに特に適しています。
現実世界では、カメラから一定距離の被写体のみに鋭く焦点を当てることができ、カメラから近いまたは遠いオブジェクトは焦点がずれます。 ぼかしはオブジェクトの距離に関する視覚的な手掛かりを与えます。 また、「Bokeh」(ボケ) という用語は、画像の明るい部分の周りに、焦点から外れて魅力的に表現される視覚的な模様のことです。 一般的に Bokeh (ボケ) の形状は、円形、六角形、その他高次の二面体群です。
通常のバージョンでは円形の形状(円形のテクスチャサンプリングで生成)のみサポートされる一方で、DirectX 11 バージョンはBokeh Texture(ボケ テクスチャ)で任意の形状に崩すことができます
次の画像は Depth of Field の例で、焦点の当たった前景と焦点を外した背景が示されています。
その他の image effects 同様、この効果は Unity Pro 専用です。 必ず、Pro Standard Assets をインストールしてください。
Properties
| Focal Settings | |
| Visualize | Overlay color indicating the camera's focal area. |
| Focal distance | The distance from the camera position to the focal plane, in world space. |
| Focal Size | Increases the size of the in-focus region. |
| Focus on Transform | Determines the focal distance using a target object in the scene. |
| Aperture | The camera's aperture, which defines the transition between focused and defocused areas. It is recommended to keep this value as high as possible, since sampling artifacts may otherwise occur, especially with large Max Blur Distance values. Larger aperture values automatically trigger downsampling to produce a proper defocus. |
| Defocus Type | The algorithm used to produce defocused areas. DX11 is an effective technique for real Bokeh; DiscBlur is based on a more traditional (gather-and-scatter) blur. |
| Sample Count | The number of filter taps. Significantly affects performance. |
| Max Blur Distance | The maximum distance of the filter taps. Affects texture-cache behaviour; if the value is too big, undersampling artifacts may occur. Values smaller than 4.0 should give adequate results. |
| High Resolution | Performs the defocus operations at full resolution. Affects performance, but can help reduce unwanted noise and produces more clearly defined Bokeh shapes. |
| Near Blur | Enables overlapping of foreground areas, at a higher performance cost. |
| Overlap Size | Increases the overlap range of the foreground where needed. |
| DX11 Bokeh Settings | |
| Bokeh Texture | The texture defining the Bokeh shape. |
| Bokeh Scale | The size of the Bokeh texture. |
| Bokeh Intensity | The blend intensity of the Bokeh shapes. |
| Min Luminance | Only pixels brighter than this value will cast Bokeh shapes. Limits overdraw to a more reasonable amount, thus affecting performance. |
| Spawn Heuristic | Bokeh shapes are only cast when the affected pixel passes a frequency check. A threshold around 0.1 seems to be a good tradeoff between performance and image quality. |
A comparison of DirectX 11 and DiscBlur

DX11 allows smooth transitions at high resolution (at a greater performance cost).

Due to the nature of DiscBlur's texture-sampling approach, there is a limit to the blur radius before sampling artifacts become noticeable. Also, only spherical Bokeh shapes are possible.
Bokeh splatting on DirectX 11
This powerful technique enables proper scattering, but because of its high fill-rate demands it should be used with care. The Spawn Heuristic and Min Luminance properties determine when and where Bokeh sprites are placed; when a pixel fails the luminance and frequency checks, a simple box blur is used instead. Since the box blur uses the same kernel width as the Bokeh sprites, the difference is hard to notice.
The following picture shows that the road, which is neither bright nor subject to large frequency changes, can simply be blurred with a box filter without ruining the Bokeh effect.

An example with a small Max Blur Distance

An example with a large Max Blur Distance
Hardware support
This effect requires a graphics card with pixel shaders (3.0) or OpenGL ES 2.0, plus support for depth textures. Supported devices: on PC, NVIDIA cards since 2004 (GeForce 6), AMD cards since 2005 (Radeon X1300), and Intel cards since 2006 (GMA X3000); on mobile, OpenGL ES 2.0 with depth texture support; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-19
script-NoiseAndGrain
The Noise And Grain effect simulates noise typical of film and photography. Because it uses a special blend mode, this noise can also be used to enhance image contrast. It can likewise reproduce typical low-light camera noise, or soften the borders of halos and bloom.
The DirectX 11 implementation is completely independent of any texture reads, making it a good fit for modern graphics hardware.
The regular version uses a noise texture with an average luminance of 0.5 to prevent unwanted brightness changes in the processed result. The default texture supplied is an example of this.

A screenshot of this effect in action. Note the smoothness, that the grain is rendered mainly into the bright and dark areas, and that it carries a distinctive blue tint.
As with the other image effects, this effect is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.
Properties
| DirectX11 Grain | Enables high-quality noise and grain (DX11 only). |
| Monochrome | Uses greyscale noise only. |
| Intensity Multiplier | Global brightness adjustment. |
| General | Adds noise equally across all luminance ranges. |
| Black Boost | Adds extra noise specifically to low-luminance areas. |
| White Boost | Adds extra noise specifically to high-luminance areas. |
| Mid Grey | Defines the ranges for the high- and low-level noise above. |
| Color Weights | Additionally tints the noise. |
| Texture | The texture used in non-DX11 mode. |
| Filter | Texture filtering. |
| Softness | Defines the sharpness of the noise and grain. Higher values perform better but require a temporary render target. |
| Advanced | |
| Tiling | Tiling of the noise pattern (adjustable separately for each color channel in non-DX11 texture mode). |
Hardware support
This effect requires a graphics card with pixel shaders (2.0) or OpenGL ES 2.0. Supported devices: on PC, NVIDIA cards since 2003 (GeForce FX), AMD cards since 2005 (Radeon 9500), and Intel cards since 2005 (GMA 900); on mobile, OpenGL ES 2.0 is required; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-19
script-ScreenOverlay
The Screen Overlay image effect is a simple way of blending different kinds of textures over the entire screen to create unique looks or effects.

An example of using an overlay to create a cheap light-leak effect
As with the other image effects, this effect is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.
Properties
| Blend Mode | The blend mode used when applying the texture. |
| Intensity | The strength, or opacity, with which the overlay texture is applied. |
| Texture | The overlay texture itself. |
Hardware support
This effect requires a graphics card with pixel shaders (2.0) or OpenGL ES 2.0. Supported devices: on PC, NVIDIA cards since 2003 (GeForce FX), AMD cards since 2005 (Radeon 9500), and Intel cards since 2005 (GMA 900); on mobile, OpenGL ES 2.0 is required; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-26
script-ColorCorrectionLut
Color Correction Lut (Lut stands for lookup texture) is an optimized way of adjusting the colors of an image. Instead of tweaking individual color channels, as with Color Correction Curves, a single texture is used to produce the corrected image: the color of each pixel in the original image is used as a vector to address the lookup texture.
The advantages are improved performance and a more professional workflow, in which all color transforms can be defined in dedicated image-manipulation software (such as Photoshop or GIMP) for precise results.

A simple scene with neutral color correction applied

The same scene using the ContrastEnhanced lookup texture
As with the other image effects, this effect is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.
Properties
| Based On | A 2D representation of the 3D lookup texture that will be used to generate the corrected image. |
Lookup texture requirements
The 2D texture representation must describe a volume texture when unwrapped in a specific layout: a contiguous series of image slices along the depth axis.
The following image shows such an unwrapped texture, which effectively enhances image contrast. It is included in the Standard Packages.

The image shows a 16x16x16 color lookup texture (LUT) unwrapped into a texture of 256x16 dimensions. If the resulting quality is too low, a 1024x32 texture may give better results (at the cost of more memory).
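The addressing of such an unwrapped strip can be sketched as follows. This is a minimal NumPy illustration of the idea, not Unity's actual shader: it assumes the common layout in which the blue channel selects the 16x16 slice, and it uses nearest-neighbour lookup where the real effect filters between entries.

```python
import numpy as np

LUT_SIZE = 16  # a 16x16x16 LUT unwrapped into a 256x16 strip

def make_neutral_lut():
    """Build the neutral (identity) 256x16 lookup strip: 16 slices of
    16x16 laid out side by side, with the blue channel as slice index."""
    lut = np.zeros((LUT_SIZE, LUT_SIZE * LUT_SIZE, 3))
    for b in range(LUT_SIZE):
        for g in range(LUT_SIZE):
            for r in range(LUT_SIZE):
                lut[g, b * LUT_SIZE + r] = [c / (LUT_SIZE - 1) for c in (r, g, b)]
    return lut

def apply_lut(color, lut):
    """Nearest-neighbour lookup: the input color acts as a vector
    addressing the unwrapped volume texture."""
    r, g, b = (int(round(c * (LUT_SIZE - 1))) for c in color)
    return lut[g, b * LUT_SIZE + r]
```

A neutral LUT built this way leaves colors unchanged; color-grading the strip in an image editor then bakes the same grade into every lookup.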
The texture import requirements include enabling read/write support and disabling texture compression; otherwise unwanted image artifacts are likely to occur.
Example workflow
Always keep a neutral lookup texture at hand; it serves as the base from which the LUTs used for subsequent corrections are generated.
- Take a screenshot of your game
- Import it into e.g. Photoshop and apply color adjustments (contrast, brightness, color levels, and so on) until you are happy with the result
- Perform the same steps on a neutral LUT and save it as a new LUT
- Assign the new LUT to the effect and press Convert & Apply
Hardware support
This effect requires a graphics card with pixel shaders (3.0) or OpenGL ES 2.0. Supported devices: on PC, NVIDIA cards since 2004 (GeForce 6), AMD cards since 2005 (Radeon X1300), and Intel cards since 2006 (GMA X3000); on mobile, OpenGL ES 2.0 is required; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-19
script-BloomAndLensFlares
Bloom is an optical effect in which light from a bright source (such as a glint) appears to leak onto surrounding objects. The Bloom and Lens Flares effect adds bloom and also automatically generates lens flares in a very efficient way. Bloom can make a big difference to a scene; used together with HDR rendering, it produces a very distinctive impression of a magical or dreamlike world. Bloom and Lens Flares is an extension of the Glow effect, offering finer control over the bloom at the cost of rendering performance.
Note that this version is deprecated: a more flexible Bloom effect was introduced in 4.0.

In this example, Bloom and Lens Flares uses the Screen blend mode to give a soft light. The new anamorphic lens flare type helps to evoke a cinematic feeling.
As with the other image effects, this effect is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.
Properties
| Tweak Mode | Choose complex mode for additional options such as lens flares. |
| Blend mode | The method used to add bloom to the color buffer. The softer Screen mode is better for preserving bright image details, but doesn't work with HDR. |
| HDR | Whether bloom is using the HDR buffer. Pixel intensities may then leave the [0,1] range, giving a different look; see Tonemapping and HDR for details. |
| Cast lens flares | Enables or disables automatic lens flare generation. |
| Intensity | The global intensity of the added light (affects both bloom and lens flares). |
| Threshold | Regions of the image brighter than this threshold receive bloom (and potentially lens flares). |
| Blur iterations | The number of times the gaussian blur is applied. More iterations give better smoothness, e.g. hiding frequency noise, but take extra processing time. |
| Blur spread | The maximum radius of the blur. Has no effect on performance. |
| Use alpha mask | The degree to which the alpha channel acts as a mask for the bloom effect. |
| Lens flare mode | The type of lens flares: Ghosting, Anamorphic, or a combination of the two. |
| Lens flare mask | A mask used to prevent lens flare artifacts at the screen edges. |
| Local intensity | The intensity used for lens flares only. |
| Local threshold | The accumulated-light intensity threshold that defines which parts of the image receive lens flares. |
| Stretch width | The width of the anamorphic lens flares. |
| Blur iterations | The number of times blur is applied to the anamorphic lens flares. More iterations give better smoothness but take more processing time. |
| Tint Color | The color tint of the anamorphic flare type. |
| 1st-4th Color | Color tints applied to all lens flares when Ghosting or Combined is selected. |
Blend modes: Add and Screen
The blend mode determines how two images are combined when one is overlaid on the other: each pixel of the base image is combined mathematically with the pixel at the corresponding position in the overlay image. Two blend modes are available for Unity's image effects: Add and Screen.
Add mode
When images are blended in Add mode, the values of their color channels (red, green, and blue) are simply added together and clamped to a maximum value of 1. The overall effect is that areas of each image that aren't especially bright can easily blend to maximum brightness in the result. The final image tends to lose color and detail, so Add mode is useful when a dazzling "white-out" effect is required.
Screen mode
Screen mode is so named because it simulates the effect of projecting the two source images simultaneously onto a white screen. Each color channel is combined separately but identically to the others. First, the channel values of the two source pixels are inverted (i.e. subtracted from 1). Then, the two inverted values are multiplied together and the result is inverted again. The result is brighter than either of the two source pixels, but it reaches maximum brightness only if one of the original colors was already at maximum. The overall effect is that more of the color variation and detail of the source images is preserved, giving a gentler effect than Add mode.
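Per color channel, the two blend modes described above reduce to simple arithmetic. A minimal sketch, with channel values assumed to be normalized to [0,1]:

```python
def add_blend(a, b):
    """Add mode: channel values are summed and clamped at 1."""
    return min(a + b, 1.0)

def screen_blend(a, b):
    """Screen mode: invert both channels, multiply, invert again."""
    return 1.0 - (1.0 - a) * (1.0 - b)
```

Note that `screen_blend` only reaches 1.0 when one of its inputs is already 1.0, whereas `add_blend` saturates as soon as the sum exceeds the maximum.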
Hardware support
This effect requires a graphics card with pixel shaders (2.0) or OpenGL ES 2.0. Supported devices: on PC, NVIDIA cards since 2003 (GeForce FX), AMD cards since 2005 (Radeon 9500), and Intel cards since 2005 (GMA 900); on mobile, OpenGL ES 2.0 is required; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-26
script-ColorCorrectionCurves
Color Correction Curves makes color adjustments using curves for each color channel. Depth-based adjustments allow the correction to vary according to each pixel's distance from the camera. For example, objects in a landscape typically become desaturated with distance due to the scattering effect of particles in the atmosphere.
Selective adjustments can also be applied, so that a target color in the scene can be swapped for another.
As with the other image effects, this effect is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.
Properties
| Mode | Chooses between Advanced and Simple configuration modes. |
| Red | The red channel curve. |
| Green | The green channel curve. |
| Blue | The blue channel curve. |
| Red (Depth) | The red channel curve for depth-based correction. |
| Green (Depth) | The green channel curve for depth-based correction. |
| Blue (Depth) | The blue channel curve for depth-based correction. |
| Blend Curve | Defines how the color correction is blended between the foreground and the background. |
| Selective Color Correction | |
| Enable | Enables the optional selective color correction. |
| Key | The key color for selective color correction. |
| Target | The target color for selective color correction. |
Understanding curves
Curves are a powerful tool for image correction: they can be used to increase or decrease contrast, add a color tint, or create psychedelic color effects. Curves work on each of the red, green, and blue color channels separately, mapping an input brightness (i.e. the brightness of the original pixel) to a chosen output level. The relationship between input and output levels can be shown on a simple graph:-

The horizontal axis represents the input level and the vertical axis the output level; any point on the line shows the output level that a given input maps to. By default the curve is a straight diagonal, so input and output values match exactly and pixels are left unchanged. However, the curve can be redrawn to remap the brightness levels however you like. A simple example is a line sloping down from top-left to bottom-right:-

In this case the pixel brightness is inverted: 0% maps to 100%, and 75% maps to 25%. If this is applied to all color channels, the image comes out like a photographic negative of the original.
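The mapping a curve defines can be sketched as a function over key points. This simplified illustration uses piecewise-linear interpolation, whereas the editor's actual curves are smoothed splines:

```python
def eval_curve(points, x):
    """Evaluate a piecewise-linear curve given as sorted (input, output)
    key points -- a simplified stand-in for the editor's smoothed curves."""
    (x0, y0) = points[0]
    for (x1, y1) in points[1:]:
        if x <= x1:
            t = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
        x0, y0 = x1, y1
    return points[-1][1]

# The inverting curve from the text: 0% maps to 100%, 75% maps to 25%.
negative = [(0.0, 1.0), (1.0, 0.0)]
```

Replacing `negative` with the default diagonal `[(0.0, 0.0), (1.0, 1.0)]` leaves every level unchanged.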
Contrast
The detail in an image is largely conveyed by the differences in brightness between pixels, regardless of their underlying colors. Pixels whose brightness differs by less than about 2% are virtually indistinguishable; above that, the larger the difference, the stronger the impression of detail. The spread of brightness values in an image is referred to as its contrast.
If a shallow slope is used for the curve, rather than the steeper default diagonal from bottom-left to top-right, the full range of input values is squeezed into a narrower range of output values:-

The effect is to reduce the contrast, since output pixel values differ from each other less than the inputs do (indeed, two slightly different input values may even map to the same output value). Note that since the image no longer spans the full range of possible output values, the curve can be slid up or down to make the image brighter or darker overall (the average brightness is sometimes called the "sit" point and corresponds to the brightness control on a TV set). Depending on the overall brightness, reduced contrast can give the impression of gloom, fog, or a dazzling light source in a scene.
It is not necessary to reduce the contrast across the entire range of brightness levels. The curve's slope can be shallow over some sections, reducing contrast for the corresponding range of levels, and steeper than the default over others, increasing contrast there. Changing the curve like this lets you increase contrast in one part of the image while reducing it where the detail is less important:-

Color effects
If the curves are set identically for each color channel (red, green, and blue), the changes mainly affect the brightness of pixels, leaving the colors relatively unchanged. If the curves are set differently for each channel, however, the colors can change dramatically. Complex interactions between the color channels are possible, but some insight can be gained from the following basic diagram:-

As explained in the section above, a contrast reduction may be accompanied by an overall increase or decrease in brightness. If, say, the red channel is brightened, a red tint will become visible in the image. If it is darkened, the image will take on a cyan tint (since cyan is obtained by combining the other two primaries, green and blue).
Depth-based color correction
Colors often appear slightly different when viewed at a distance. In a landscape scene, for example, colors tend to become desaturated by the scattering of light in the atmosphere. This kind of effect can be created with depth-based color correction. When it is enabled, two sets of color curves become available: one for the camera's near clipping plane and one for the far clipping plane. The normalized distance between the two clipping planes is used as an interpolation parameter between the two sets of curves. The exact type of interpolation is specified by an additional blend curve, which maps the normalized distance to an interpolation value in the same way that a color curve maps an input value to an output value. By default this curve is a straight diagonal, giving a linear interpolation between the two corrections, but it can be changed to bias the correction according to distance.
Selective color correction
Using these settings, a particular color in the original image (known as the "key") can be replaced with a chosen target color. Using a single color as the key tends to produce artifacts, so a range of colors is used instead: the resulting color is an interpolation between the key and target colors, depending on how close each pixel of the original image is to the specified key color.
Editing curves
Clicking on any of the curves in the inspector opens an editing window:-

At the bottom of the window are a number of preset curves. You can also modify the curve by manipulating its key points. Right-clicking on the curve line adds a new key point, which can then be dragged around with the mouse. Right-clicking on one of the points shows a context menu with options for editing the curve. As well as allowing you to delete the key, it offers four options that determine how the point affects the shape of the curve:-
- Auto: the curve passes through the point and is smoothed against the neighbouring points.
- Free Smooth: the tangent of the curve can be edited using handles attached to the key point.
- Flat: Free Smooth mode is enabled with the tangents set horizontally.
- Broken: the key point has tangent handles as in Free Smooth mode, but the handles to the left and right of the curve can be moved separately, so the curve can make a sharp break rather than a smooth transition.
Below these options are a few settings that control a point's tangent handles:-
- Free: enables Broken mode for the tangents of the curve at that point.
- Linear: the curve between the key point and its neighbouring points is set to a straight line.
- Constant: a flat horizontal line is drawn from the curve to the neighbouring point, with the vertical displacement occurring as a sharp step.
Hardware support
This effect requires a graphics card with pixel shaders (2.0) or OpenGL ES 2.0, plus support for depth textures. Supported devices: on PC, NVIDIA cards since 2004 (GeForce 6), AMD cards since 2004 (Radeon 9500), and Intel cards since 2006 (GMA X3000); on mobile, OpenGL ES 2.0 with depth texture support; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-26
script-ContrastEnhance
The Contrast Enhance image effect improves the contrast of the image rendered by a given camera. It uses the unsharp mask technique well known from image-processing applications.
When a blur is applied to an image, the colors of neighbouring pixels are averaged out to some extent, softening sharp edges while leaving areas of flat color relatively unchanged. The idea behind the unsharp mask is to compare the image with a blurred (i.e. "unsharp") copy of itself. The difference in brightness between each pixel in the original and the corresponding pixel in the blurred image is a measure of how much contrast the pixel has against its neighbours. The brightness of each pixel is then changed in proportion to this local contrast: a pixel that became darker after blurring must be brighter than its neighbours, so its brightness is increased further, while a pixel that became brighter after blurring is darkened further. The effect is an increase in contrast in the areas of the image where the detail is most noticeable. The parameters of the unsharp mask are the pixel radius over which the colors are blurred, the degree to which the brightness is changed, and a contrast "threshold" below which no change of brightness is made.
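The procedure described above can be sketched in a few lines. This is a minimal 1-D NumPy illustration of unsharp masking with a box blur, not the shader the effect actually uses; the parameter names (radius, amount, threshold) are illustrative stand-ins for Blur Spread, Intensity, and Threshold:

```python
import numpy as np

def unsharp_mask(pixels, radius=1, amount=0.5, threshold=0.0):
    """Minimal 1-D unsharp mask: compare each pixel with a box-blurred
    copy and push its brightness away from the local average."""
    pixels = np.asarray(pixels, dtype=float)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    blurred = np.convolve(pixels, kernel, mode="same")  # zero-padded at the ends
    contrast = pixels - blurred                    # local contrast at each pixel
    contrast[np.abs(contrast) < threshold] = 0.0   # ignore tiny differences
    return np.clip(pixels + amount * contrast, 0.0, 1.0)
```

Applied to a soft step such as `[0.2, 0.2, 0.2, 0.8, 0.8, 0.8]`, the pixels on either side of the step are pushed apart, sharpening the edge while the flat interior stays unchanged.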
As with the other image effects, this effect is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.

Without Contrast Enhance

With Contrast Enhance
Properties
| Intensity | The strength of the contrast enhancement. |
| Threshold | The contrast threshold below which no enhancement is applied. |
| Blur Spread | The radius over which contrast comparisons are made. |
Hardware support
This effect requires a graphics card with pixel shaders (2.0) or OpenGL ES 2.0. Supported devices: on PC, NVIDIA cards since 2003 (GeForce FX), AMD cards since 2005 (Radeon 9500), and Intel cards since 2005 (GMA 900); on mobile, OpenGL ES 2.0 is required; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-19
script-Crease
Crease is a common non-photorealistic (NPR) rendering technique that enhances the visibility of objects by generating outlines of variable thickness.
As with the other image effects, this effect is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.

Crease shading creates depth-based outlines.
Properties
| Intensity | The intensity of the crease shading. |
| Softness | The smoothness and softness of the applied crease shading. |
| Spread | The blur radius, which also determines the thickness of the outlines. |
Hardware support
This effect requires a graphics card with pixel shaders (2.0) or OpenGL ES 2.0, plus support for depth textures. Supported devices: on PC, NVIDIA cards since 2004 (GeForce 6), AMD cards since 2004 (Radeon 9500), and Intel cards since 2006 (GMA X3000); on mobile, OpenGL ES 2.0 with depth texture support; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-13
script-DepthOfField34
Depth of Field is a common effect simulating the properties of a camera lens; the name of this version refers to the fact that the effect was added in Unity 3.4. It has since been superseded by Depth of Field Scatter, which is more modern and sophisticated, simulating lens blur with optimized rendering techniques and smoother transitions between focal areas. However, depending on the use case, performance can be much better in the old 3.4 version, since it was originally developed for older hardware.
In real life, a camera can focus sharply only on an object at a specific distance; objects nearer or farther from the camera are out of focus. The blurring not only gives a visual cue about an object's distance but also introduces Bokeh, the term for the appealing visual artifacts that appear around bright areas of the image as they fall out of focus.
An example of Depth of Field is shown in the following images, with an in-focus foreground and an out-of-focus background. Note how the foreground blur overlaps the other areas while the background does not.

Only the nearby pipes are in the focal area

A comparison of foreground and background blurring with Depth of Field
It is also worth considering the Tilt Shift effect, which gives a more straightforward, if less sophisticated, depth-of-field effect.
As with the other image effects, this effect is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.
Properties
| General Settings | |
| Resolution | Determines the internal render target sizes. A low resolution gives faster rendering and lower memory requirements. |
| Quality | The quality level. Choose between the faster OnlyBackground and the higher-quality BackgroundAndForeground, which calculates the depth of field separately for each region. |
| Simple tweak | Switches to a simpler focal model. |
| Visualize focus | Shows the specified focal area in the game view, as an aid to learning and debugging. |
| Enable bokeh | Generates a more realistic lens blur in which very bright parts are scaled and overlaid. |
| Focal Settings | |
| Focal distance | The distance from the camera position to the focal plane, in world space. |
| Object Focus | Determines the focal distance using a target object in the scene. |
| Smoothness | The smoothness of the transition from out-of-focus to in-focus areas. |
| Focal Size | The size of the in-focus area. |
| Blur | |
| Blurriness | The number of iterations used when blurring the various buffers (each iteration costs processing time). |
| Blur spread | The blur radius. It is resolution-independent, so you may need to readjust the value for each required resolution. |
| Bokeh Settings | |
| Destination | Enabling blur for both foreground and background increases rendering time but gives more realistic results. |
| Intensity | The blend intensity used when accumulating the Bokeh shapes. An important value that should always be tweaked carefully. |
| Min luminance | The luminance threshold below which pixels receive no Bokeh. |
| Min contrast | The contrast threshold below which pixels receive no Bokeh. The important insight here is that Bokeh shapes are usually only needed in high-frequency areas (i.e. areas of the image that look cluttered or noisy). Tweaking this parameter to avoid spawning unneeded Bokeh shapes improves performance. |
| Downsample | The internal render target size used when accumulating the Bokeh shapes. |
| Size | The maximum Bokeh size, which is further scaled by the amount of defocus (the circle of confusion). |
| Bokeh Texture | The texture defining the Bokeh shape. |
Note that since the Bokeh effect is created by drawing triangles per pixel, it can drastically affect your frame rate if it is not tweaked optimally. Adjust Size, Min luminance, Min contrast, Downsample, and Resolution to improve performance. Also, since the screen is darkened before the Bokeh shapes are applied, a proper Blurriness level is needed to remove artifacts.
Hardware support
This effect requires a graphics card with pixel shaders (3.0) or OpenGL ES 2.0, plus support for depth textures. Supported devices: on PC, NVIDIA cards since 2004 (GeForce 6), AMD cards since 2005 (Radeon X1300), and Intel cards since 2006 (GMA X3000); on mobile, OpenGL ES 2.0 with depth texture support; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-26
script-Tonemapping
Tonemapping is usually understood as the process of mapping color values from HDR (high dynamic range) to LDR (low dynamic range). In Unity this means that, on most platforms, arbitrary 16-bit floating-point color values are mapped to traditional 8-bit values in the [0,1] range.
Note that Tonemapping only works properly if the camera has HDR enabled. It is also recommended to give light sources intensities greater than the usual values to make use of the wider range; just as in the real world, there is always a bigger range of luminances than the human eye, or any device, can register at once.
Tonemapping works well in conjunction with the HDR-enabled Bloom image effect. Note that Bloom must be applied before Tonemapping, as the high dynamic range would otherwise be lost. In general, any effect that benefits from high luminances should be scheduled before Tonemapping (another example being the Depth of Field image effect).
There are several ways of mapping intensities to LDR (selectable via Mode). This effect provides several techniques, two of which are adaptive (AdaptiveReinhard and AdaptiveReinhardAutoWhite), meaning that color changes are applied gradually, with a delay until the change in intensity is fully registered. The same phenomenon occurs with cameras and the human eye. It enables interesting dynamic effects, such as the natural adjustment of the eye that occurs when moving from a dark tunnel into bright sunlight.
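As a sketch of the underlying math, the classic Reinhard operator that the adaptive modes build on compresses an HDR luminance L to L/(1+L); the extended variant adds a white point so that sufficiently bright values still reach pure white. This is a simplified illustration, not Unity's shader code:

```python
def reinhard(luminance, white=None):
    """Reinhard tone mapping: compress an HDR luminance into [0, 1].
    With a white point, luminances at `white` map exactly to 1."""
    if white is None:
        return luminance / (1.0 + luminance)
    return luminance * (1.0 + luminance / (white * white)) / (1.0 + luminance)
```

Without a white point, even extremely bright values approach but never reach 1, which is why banding-free HDR input matters for the result.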
The following two screenshots show Photographic Tonemapping with different exposure values. Note how banding is avoided by using an HDR camera.


As with the other image effects, this effect is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.
Properties
| Mode | Chooses the tonemapping algorithm to use. |
| Exposure | The simulated exposure, defining the effective range of brightness. |
| Average grey | The average grey value of the scene, which defines the resulting intensity. |
| White | The smallest value that will be mapped to white. |
| Adaption speed | The adjustment speed of all the adaptive tonemappers. |
| Texture size | The internal texture size of all the adaptive tonemappers. Bigger values capture more detail when computing the intensity, at a cost in performance. |
Hardware support
This effect requires a graphics card with pixel shaders (2.0) or OpenGL ES 2.0. Supported devices: on PC, NVIDIA cards since 2003 (GeForce FX), AMD cards since 2005 (Radeon 9500), and Intel cards since 2005 (GMA 900); on mobile, OpenGL ES 2.0 is required; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-20
script-EdgeDetectEffectNormals
This Edge Detect image effect creates outlines around edges by taking the scene geometry into account. Edges are determined not by color differences but by the surface normals and camera distances of neighbouring pixels (a normal is an "arrow" indicating the direction the surface faces at a given pixel's position). Generally, wherever two adjacent pixels have significantly different normals or significantly different distances from the camera, an edge is drawn.
As with the other image effects, this effect is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.

Edge Detection in action. Note that the edge outlines can be smoothed out by adding an Antialiasing effect to run after Edge Detection.
This effect uses the ImageEffectOpaque attribute, which makes the image effect execute before the transparent rendering pass. By default, image effects are executed after both the opaque and the transparent passes have been fully rendered.
Properties
| Mode | Chooses the filter type (see below). |
| Depth Sensitivity | The minimum difference in distance between neighbouring pixels for an edge to be drawn. |
| Normals Sensitivity | The minimum difference in normals between neighbouring pixels for an edge to be drawn. |
| Sampling Distance | Larger sampling distances (the default is 1.0) create thicker edges, but also introduce halo artifacts. |
| Edges exponent | The exponent used in the Sobel filter. Smaller values detect edges even across small depth differences. |
| Background options | |
| Edges only | Blends the background with a fixed color. |
| Background | The color used when Edges only is greater than 0. |
Filter types
The new SobelDepthThin filter allows edge detection to work together with other depth-based image effects, such as Depth of Field, Fog, or Motion Blur, because the edges it produces don't cross object silhouettes.

Edges don't leak into the unfocused background; at the same time, the background blur doesn't remove the generated edges.
Note that only depth is used for edge detection, so this filter discards edges inside silhouettes.
SobelDepth works similarly but doesn't discard edges outside object silhouettes. Edge detection is therefore more precise, but it doesn't combine as well with other depth-based effects.
TriangleDepthNormals is usually the cheapest filter, even though it examines both depth and normals to decide whether a pixel sits on an edge, i.e. it detects more than just object silhouettes. A large amount of normal-map detail, however, may break this filter.
RobertsCrossDepthNormals shares its properties with the Triangle filter but uses more samples to determine edges. As a natural by-product, the resulting edges tend to be thicker.
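The depth half of the comparison these filters perform can be sketched with a Roberts-cross style check on a depth buffer: diagonally neighbouring samples are differenced, and an edge is reported when the combined difference exceeds the sensitivity. A minimal illustration, ignoring the normals term and the actual shader details:

```python
def roberts_cross_edge(depth, x, y, sensitivity):
    """Roberts-cross style check on a 2-D depth buffer: an edge is
    reported where diagonally neighbouring samples differ too much."""
    d = abs(depth[y][x] - depth[y + 1][x + 1]) + \
        abs(depth[y + 1][x] - depth[y][x + 1])
    return d > sensitivity
```

Lowering the sensitivity (as with Depth Sensitivity above) makes smaller depth discontinuities register as edges.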
Hardware support
This effect requires a graphics card with pixel shaders (2.0) or OpenGL ES 2.0, plus support for depth textures. Supported devices: on PC, NVIDIA cards since 2004 (GeForce 6), AMD cards since 2004 (Radeon 9500), and Intel cards since 2006 (GMA X3000); on mobile, OpenGL ES 2.0 with depth texture support; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-20
script-Fisheye
The Fisheye image effect creates distortions similar to those of a fisheye lens (although any lens stretches the image to some degree).
As with the other image effects, this effect is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.
Properties
| Strength X | The horizontal distortion. |
| Strength Y | The vertical distortion. |
Hardware support
This effect requires a graphics card with pixel shaders (2.0) or OpenGL ES 2.0. Supported devices: on PC, NVIDIA cards since 2003 (GeForce FX), AMD cards since 2005 (Radeon 9500), and Intel cards since 2005 (GMA 900); on mobile, OpenGL ES 2.0 is required; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-22
script-GlobalFog
The Global Fog image effect creates camera-based exponential fog. All calculations are done in world space, which makes sophisticated effects such as height-based fog possible (see the pictures below).

An example of fog using both distance- and height-based settings

An example of cheap fog mimicking an atmosphere
As with the other image effects, this effect is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.
Properties
| Fog Mode | The type of fog: distance-based, height-based, or both. |
| Start Distance | The distance, in world space units, at which the fog starts fading in. |
| Global Density | How densely the Fog Color accumulates with distance. |
| Height Scale | How quickly the fog density falls off with height (assuming a height-based Fog Mode). |
| Height | The world-space Y coordinate at which the fog starts fading in. |
| Global Fog Color | The color of the fog. |
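A minimal sketch of distance-based exponential fog, to illustrate how the start distance and density interact; this is an idealized formula under stated assumptions, not Unity's exact shader:

```python
import math

def exponential_fog(distance, start_distance, density):
    """Fog amount in [0, 1] growing exponentially with the distance
    past the start distance -- a sketch of distance-based fog."""
    d = max(0.0, distance - start_distance)
    return 1.0 - math.exp(-density * d)
```

Blending the scene color toward the fog color by this amount produces the fade-in: zero fog before the start distance, asymptotically full fog far away.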
Hardware support
This effect requires a graphics card with pixel shaders (2.0) or OpenGL ES 2.0, plus support for depth textures. Supported devices: on PC, NVIDIA cards since 2004 (GeForce 6), AMD cards since 2004 (Radeon 9500), and Intel cards since 2006 (GMA X3000); on mobile, OpenGL ES 2.0 with depth texture support; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-26
script-SunShafts
The Sun Shafts image effect is a powerful tool for creating radially scattered light, such as halos or the rays emanating from very bright light sources.
As with the other image effects, this effect is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.

An example of the Sun Shafts effect
Properties
| Rely on Z Buffer | This option can be used when a depth texture is unavailable or too expensive to calculate (e.g. in forward rendering with a large number of objects). Note that if this option is disabled, Sun Shafts must be the very first image effect applied to the camera. |
| Resolution | The resolution at which the shafts are generated. Lower resolutions are faster to calculate and create softer results. |
| Blend Mode | Chooses between the softer Screen mode and a simple Add operation. |
| Sun Transform | The transform of the light source that casts the sun shafts. Only the position is significant. |
| Center on ... | Moves the Sun Transform along a ray from the center of the game view camera. |
| Shafts color | The tint color of the shafts. |
| Distance falloff | The falloff rate at which the intensity of the sun shafts decreases with distance from the Sun Transform. |
| Blur size | The radius over which pixel colors are combined during blurring. |
| Blur iterations | The number of times the blur operation is repeated. More iterations give smoother, longer rays at a higher computational cost. |
| Intensity | The brightness of the added sun shafts. |
| Use alpha mask | The degree to which the alpha channel of the color buffer should be used when generating the sun shafts. This is useful when your skybox has a suitable alpha channel that defines a mask (e.g. for clouds blocking the sun shafts). |
Blend modes: Add and Screen
The blend mode determines how two images are combined when one is overlaid on the other: each pixel of the base image is combined mathematically with the pixel at the corresponding position in the overlay image. Two blend modes are available for Unity's image effects: Add and Screen.
Add mode
When images are blended in Add mode, the values of their color channels (red, green, and blue) are simply added together and clamped to a maximum value of 1. The overall effect is that areas of each image that aren't especially bright can easily blend to maximum brightness in the result. The final image tends to lose color and detail, so Add mode is useful when a dazzling "white-out" effect is required.
Screen mode
Screen mode is so named because it simulates the effect of projecting the two source images simultaneously onto a white screen. Each color channel is combined separately but identically to the others. First, the channel values of the two source pixels are inverted (i.e. subtracted from 1). Then, the two inverted values are multiplied together and the result is inverted again. The result is brighter than either of the two source pixels, but it reaches maximum brightness only if one of the original colors was already at maximum. The overall effect is that more of the color variation and detail of the source images is preserved, giving a gentler effect than Add mode.
Hardware support
This effect requires a graphics card with pixel shaders (2.0) or OpenGL ES 2.0, plus support for depth textures. Supported devices: on PC, NVIDIA cards since 2004 (GeForce 6), AMD cards since 2004 (Radeon 9500), and Intel cards since 2006 (GMA X3000); on mobile, OpenGL ES 2.0 with depth texture support; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-18
script-TiltShift
Tilt Shift is a specialized version of the Depth of Field effect that allows very smooth transitions between focused and unfocused areas. It is easier to use and is generally less prone to artifacts; however, because it relies on dependent texture lookups, it can also have a slightly higher processing overhead.

Tilt Shift in action. Note how the effect produces an overall smoother result.
As with the other image effects, this effect is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.
Properties
| Focal Settings | |
| Visualize | Displays the focal plane in the game view with a green tint (useful for learning and debugging). |
| Distance | The distance from the camera position to the focal plane, in world space units. |
| Smoothness | The smoothness of the transition from out-of-focus to in-focus areas. |
| Background Blur | |
| Downsample | Downsamples most internal buffers (this makes the effect faster, but blurrier). |
| Iterations | The number of blur iterations for the background, i.e. everything behind the focal plane. |
| Max Blur spread | The maximum blur distance for the out-of-focus areas; higher values blur them further. |
| Foreground Blur | |
| Enable | Enables blurring of the foreground. This usually gives better results but costs extra processing time. |
| Iterations | The number of blur iterations for the foreground, i.e. everything in front of the focal plane. Each iteration costs extra processing time. |
Hardware support
This effect requires a graphics card with pixel shaders (3.0) or OpenGL ES 2.0, plus support for depth textures. Supported devices: on PC, NVIDIA cards since 2004 (GeForce 6), AMD cards since 2005 (Radeon X1300), and Intel cards since 2006 (GMA X3000); on mobile, OpenGL ES 2.0 with depth texture support; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-22
script-Vignetting
The Vignetting image effect darkens and blurs the corners of the image and applies chromatic aberration to them. It can simulate the view through a camera lens, but it can also be used to create abstract effects.

An example of vignetting and chromatic aberration. Notice the darkened corners and the purple and slightly green color fringing introduced by the aberration.
As with the other image effects, this effect is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.
Properties
| Vignetting | The degree to which the screen corners are darkened. Set it to 0 to disable this part of the effect and gain some performance. |
| Blurred Corners | The amount of blur applied to the screen corners. Set it to 0 to disable this part of the effect and gain some performance. |
| Blur Distance | The sampling distance used by the blur filter when blurring the screen corners. |
| Aberration Mode | Advanced enables the enhanced aberration effect; Simple generates only tangential aberration (limited to the corners). |
| Strength | The overall aberration strength (not to be confused with the color offset distances). Defaults to 1.0. |
| Tangential Aberration | The amount of tangential chromatic aberration. |
| Axial Aberration | The amount of axial chromatic aberration; its magnitude varies even at small distances from the corners of the image plane. |
| Contrast Dependency | The higher this value, the greater the contrast needed to trigger the aberration. Higher values are closer to the real world (in which case HDR input is recommended). |
Advanced mode
Advanced mode is more expensive, but models chromatic aberration that is closer to the real world.

Advanced mode gives powerful control over the chromatic aberration model. Also known as green or purple fringing, this is a phenomenon commonly seen in camera photography (see the figures below).

A close-up of color fringing. Notice the green and purple appearing in high-contrast areas. The effect depends on the camera and lens used, and arises because different wavelengths are projected onto different image planes.
Hardware support
This effect requires a graphics card with pixel shaders (2.0) or OpenGL ES 2.0. Supported devices: on PC, NVIDIA cards since 2003 (GeForce FX), AMD cards since 2005 (Radeon 9500), and Intel cards since 2005 (GMA 900); on mobile, OpenGL ES 2.0 is required; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-22
script-BlurEffect
The Blur effect blurs the rendered image in real time.
As with the other image effects, this effect is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.

The Blur effect applied to a scene
Properties
| Iterations | The amount of blurring required. More iterations give a blurrier image, but each additional iteration has a performance cost. |
| Blur Spread | Higher values spread the blur further for the same number of iterations, at some cost to image quality. A value of 0.6 to 0.7 is usually a good balance between speed and quality. |
Hardware support
This effect requires a graphics card with pixel shaders (2.0) or OpenGL ES 2.0. Supported devices: on PC, NVIDIA cards since 2003 (GeForce FX), AMD cards since 2005 (Radeon 9500), and Intel cards since 2005 (GMA 900); on mobile, OpenGL ES 2.0 is required; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-26
script-ColorCorrectionEffect
Color Correction allows you to apply arbitrary color correction to your scene, like the curves tool in Photoshop or GIMP. This page explains how to set up color correction in Photoshop and then apply exactly the same correction at runtime in Unity.
As with the other image effects, Color Correction is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.

Color correction applied to a scene. The color ramp used is shown (magnified) on the right.

The color ramp used for the image above.
Getting Photoshop color correction into Unity
- Take a screenshot of a typical scene in your game
- Open it in Photoshop and tweak the colors (contrast, brightness, color levels, etc.) until you are happy with the result
- Save the resulting .acv file from the adjustment dialog
- Open the color ramp image in Photoshop
- Now apply the same color correction to the ramp image: open the adjustment dialog again and load the saved .acv file
- Select the camera you are using in Unity and add the Color Correction image effect to it, then select your modified color ramp
- Hit Play to see the effect in action
Details
Color correction works by remapping the colors of the original image through a color ramp image (of size 256x1):
- result.red = the red value of the ramp image pixel at index (original.red + RampOffsetR)
- result.green = the green value of the ramp image pixel at index (original.green + RampOffsetG)
- result.blue = the blue value of the ramp image pixel at index (original.blue + RampOffsetB)
So, for example, to invert the colors in the image you only need to flip the original color ramp horizontally (so that it goes from white to black instead of black to white).
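The per-channel remapping above amounts to a clamped table lookup. A minimal sketch of the idea, with the ramp holding plain 8-bit values rather than image pixels:

```python
RAMP_LEN = 256  # the ramp image is 256x1: one entry per 8-bit level

def remap_channel(value, ramp, offset=0):
    """Look up an 8-bit channel value in a 1-D ramp, as in
    result.red = ramp[original.red + RampOffsetR]."""
    return ramp[min(max(value + offset, 0), RAMP_LEN - 1)]

identity_ramp = list(range(RAMP_LEN))          # leaves colors unchanged
inverted_ramp = list(reversed(identity_ramp))  # the flipped ramp: a negative
```

The `inverted_ramp` corresponds exactly to flipping the ramp image horizontally, as described above.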
If you simply want to remap colors based on luminance, use Grayscale instead.
Tips:
- Color correction ramp images should not have mipmaps; turn them off in the Import Settings. The texture should also be set to Clamp mode.
Hardware support
This effect requires a graphics card with pixel shaders (2.0) or OpenGL ES 2.0. Supported devices: on PC, NVIDIA cards since 2003 (GeForce FX), AMD cards since 2005 (Radeon 9500), and Intel cards since 2005 (GMA 900); on mobile, OpenGL ES 2.0 is required; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-19
script-ContrastStretchEffect
Contrast Stretch dynamically adjusts the contrast of the image according to the range of brightness levels it contains. The adjustment takes place gradually over a period of time, so the player can be briefly dazzled by bright outdoor light when, say, emerging from a dark tunnel. Equally, when moving from a bright scene to a dark one, the "eye" takes some time to adapt.
As with the other image effects, this effect is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.
Understanding Contrast Stretch
The clarity of detail in an image is largely determined by the range of different brightness values it contains. It is difficult for the eye to distinguish between two brightness levels that differ by less than about 2%; above that, the greater the difference, the stronger the impression of detail. The overall separation between the brightest and darkest values in an image is referred to as the contrast of that image.
It is common for an image not to make full use of the available range of brightness values. One way to improve the contrast is to redistribute the pixel values so as to make better use of the range: the darkest level in the original image is remapped to an even darker level, the brightest to an even brighter level, and all the levels in between are moved proportionally farther apart. The distribution of levels is then "stretched" out farther across the available range, which is why this effect is known as Contrast Stretch.
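The remapping described above is a linear rescale of the observed brightness range onto the output range. This is a static sketch; the actual effect adapts gradually over time, and the output bounds play the role of Limit Minimum and Limit Maximum:

```python
def contrast_stretch(pixels, out_min=0.0, out_max=1.0):
    """Remap pixel brightnesses so the darkest becomes out_min, the
    brightest becomes out_max, and the levels in between move
    proportionally farther apart."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # a flat image: nothing to stretch
        return list(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [out_min + (p - lo) * scale for p in pixels]
```

A narrow input range such as [0.4, 0.6] gets spread over the whole output range, making small brightness differences far easier to distinguish.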
Contrast stretching mimics the way the eye adapts to different lighting conditions. When you walk from an outdoor area into a dimly lit building, the view appears indistinct for a short time until the contrast is stretched to reveal the detail. When you emerge from the building, the contrast stretch produces the effect of the player being dazzled for a short time until the "eye" adapts to the bright outdoor light.

No contrast stretch applied

Contrast stretch applied with a dark skybox. Note that the buildings become brighter.

Contrast stretch applied with a very bright skybox. Note that the buildings become darker.
Properties
| Adaptation Speed | The speed of the transition; the lower this number, the slower the transition. |
| Limit Minimum | The darkest level in the image after adjustment. |
| Limit Maximum | The brightest level in the image after adjustment. |
Tips:
- Since Contrast Stretch is applied over a period of time, the full effect is only visible in Play mode.
Hardware support
This effect requires a graphics card with pixel shaders (2.0) or OpenGL ES 2.0. Supported devices: on PC, NVIDIA cards since 2003 (GeForce FX), AMD cards since 2005 (Radeon 9500), and Intel cards since 2005 (GMA 900); on mobile, OpenGL ES 2.0 is required; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-21
script-EdgeDetectEffect
The Edge Detect image effect adds black edges to the image wherever the color differences exceed a given threshold.
If more sophisticated, geometry-based edge detection is required, the Standard Assets also provide a normals and depth-based edge detection effect.
As with the other image effects, this effect is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.

The Edge Detection effect applied to a scene
| Threshold | Edges are displayed wherever the color difference between neighbouring pixels exceeds this value. Increasing the value makes the edges less sensitive to changes in texture or lighting. |
Hardware support
This effect requires a graphics card with pixel shaders (2.0) or OpenGL ES 2.0. Supported devices: on PC, NVIDIA cards since 2003 (GeForce FX), AMD cards since 2005 (Radeon 9500), and Intel cards since 2005 (GMA 900); on mobile, OpenGL ES 2.0 is required; on consoles, Xbox 360 and PS3.
All image effects are automatically disabled when they cannot run on the end user's graphics card.
Page last updated: 2012-11-26
script-GlowEffect
Glow (sometimes called Bloom) dramatically enhances the rendered image by making overbright parts "glow" (e.g. the sun, light sources, strong highlights). The Bloom image effect gives greater control over the glow, but has a higher processing overhead.
As with the other image effects, this effect is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.

The Glow effect applied to a scene
Properties
| Glow Intensity | The total brightness at the brightest spots of the glowing areas. |
| Blur Iterations | The number of times the blur is applied when drawing. Each iteration costs processing time. |
| Blur Spread | The pixel distance over which pixels are combined to produce the blur. |
| Glow Tint | The color tint applied to the glow. |
| Downsample Shader | The shader used for the glow. You generally should not have to change this. |
Details
Glow uses the alpha channel of the final image to represent "color brightness". All colors are treated as RGB multiplied by the alpha channel. The contents of the alpha channel can be viewed in the Scene View.
The built-in shaders write the following information into the alpha channel:
- The main texture's alpha multiplied by the main color's alpha (not affected by lighting).
- Specular shaders add the specular highlight multiplied by the specular color's alpha.
- Transparent shaders do not modify the alpha channel at all.
- Particle shaders do not modify the alpha channel, except for Particles/Multiply, which darkens everything that is in the alpha.
- Skybox shaders multiply the tint color's alpha into the alpha channel.
In most cases, to get a good glow you should:
- Set the material's main color alpha to zero, or use a texture with a zero alpha channel. In the latter case, you can paint non-zero alpha into parts of the texture to make those parts glow.
- Set the specular color's alpha on Specular shaders to 100%.
- Keep in mind what the camera clears the alpha channel to (or which solid color it clears to), and how much alpha the skybox material uses.
- Add the Glow image effect to the camera. Tweak the Glow Intensity and Blur Iterations values; the comments in the shader source code are also worth consulting.
- The alpha channel on the skybox can be used to great effect, adding extra glow when looking at the sun.
Tips:
- Use the render mode dropdown in the scene view toolbar to quickly see which objects output different values to the alpha channel.
ハードウェアの対応サポート
エフェクトが有効となるためにはピクセルシェーダ(2.0)あるいはOpenGL ES2.0対応したグラフィックスカードが必要です。対応デバイスについてPCでは2003年以降のNVIDIAグラフィックスカード(GeForce FX)、2005年以降のAMDグラフィックスカード(Radeon 9500)、2005年以降のIntelグラフィックスカード(GMA 900)、モバイルではOpenGL ES2.0が必要、据え置き機ではXbox 360、PS3です。
全てのイメージ効果はエンドユーザのグラフィックスカードで動作しないと分かった場合、自動的に無効化されます。
Page last updated: 2012-11-21
script-GrayscaleEffect
Grayscale is a simple image effect that changes colors to grayscale by default. It can also use a Texture Ramp texture to remap luminance to arbitrary colors.
Like all image effects, Grayscale is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.

Grayscale effect applied to the scene
Remapping colors
Grayscale does a simple job of color correction: it remaps the grayscale image to arbitrary colors. This can be used, for example, to achieve a heat-vision effect.
The process of remapping colors is very similar to the ColorCorrection effect:
- Take a screenshot of a typical scene in your game.
- Open it in Photoshop and convert it to grayscale.
- Color correct it using the Curves adjustment.
- Save the .acv file from the Curves dialog.
- Open the color ramp texture in Photoshop.
- Apply the color correction to the ramp image: open the Curves dialog again and load the saved .acv file.
- In Unity, select your camera, add the Grayscale effect from the component menu, and select the modified color ramp.
- Hit Play to see the effect in action!
Details
Color correction works by remapping the original image's luminance through a color ramp image (sized 256x1):
- result.color = the pixel color of the ramp image at index (OriginalLuminance + RampOffset). For example, to invert the image colors you simply flip the original color ramp horizontally (so it goes from white to black instead of black to white).

Grayscale applied to the scene, with a color ramp that goes from white to black.
A more complex version of color remapping that corrects toward arbitrary colors can be achieved with the ColorCorrection image effect.
Hardware support
This effect requires a graphics card with pixel shader (2.0) or OpenGL ES 2.0 support. Supported devices are, on PC, NVIDIA cards since 2003 (GeForce FX), AMD cards since 2005 (Radeon 9500), and Intel cards since 2005 (GMA 900); mobile devices require OpenGL ES 2.0; among consoles, the Xbox 360 and PS3 are supported.
All image effects automatically disable themselves when they cannot run on the end user's graphics card.
Page last updated: 2012-11-13
script-MotionBlur
The Motion Blur image effect enhances fast-moving scenes by leaving "motion trails" of previously rendered frames. For a more modern implementation of motion blur, see the newer Camera Motion Blur effect.
Like all image effects, Motion Blur is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.

Motion Blur effect applied to a rotating scene
| Blur Amount | How much of the previous frames to leave in the image. Higher values produce longer motion trails. |
| Extra Blur | If checked, the motion trails become smoother because extra blur is applied to the previous frames. |
Tips:
- Motion Blur only works while in play mode because it is time based.
Hardware support
The Motion Blur effect works on all graphics cards that support rendering to a texture, e.g. GeForce2, Radeon 7000 and later. All image effects automatically disable themselves when they cannot run on the end user's graphics card.
Page last updated: 2012-11-26
script-NoiseEffect
Noise is an image post-processing effect that can simulate two kinds of noise: TV and VCR noise.
Like the other image effects, this effect is only available in Unity Pro, and the Pro Standard Assets must be imported before it can be used.

A high-intensity Noise effect applied to the scene
| Monochrome | If enabled, TV-style noise is applied. If disabled, VCR-style noise is applied: it distorts colors in YUV color space, so a hue shift occurs, mostly toward magenta/green hues. |
| Grain Intensity Min/Max | The noise grain intensity takes random values between the minimum and maximum. |
| Grain Size | The size of one grain texture pixel; larger values make the grain coarser. |
| Scratch Intensity Min/Max | The scratch/dust intensity takes random values between the minimum and maximum. |
| Scratch FPS | The frame rate at which scratches jump to different screen positions. |
| Scratch Jitter | Scratches jitter around their original positions by this amount. |
Hardware support
This effect requires a graphics card with pixel shader (2.0) or OpenGL ES 2.0 support. Supported devices are, on PC, NVIDIA cards since 2003 (GeForce FX), AMD cards since 2005 (Radeon 9500), and Intel cards since 2005 (GMA 900); mobile devices require OpenGL ES 2.0; among consoles, the Xbox 360 and PS3 are supported.
All image effects automatically disable themselves when they cannot run on the end user's graphics card.
Page last updated: 2012-11-26
script-SepiaToneEffect
Sepia Tone is an image effect that tints the image like an old photograph.
Like the other image effects, it is only available in Unity Pro, and the Pro Standard Assets must be imported before it can be used.

The Sepia Tone image effect applied to the scene
Hardware support
This effect requires a graphics card with pixel shader (2.0) or OpenGL ES 2.0 support. Supported devices are, on PC, NVIDIA cards since 2003 (GeForce FX), AMD cards since 2005 (Radeon 9500), and Intel cards since 2005 (GMA 900); mobile devices require OpenGL ES 2.0; among consoles, the Xbox 360 and PS3 are supported.
All image effects automatically disable themselves when they cannot run on the end user's graphics card.
Page last updated: 2012-11-26
script-SSAOEffect
Screen Space Ambient Occlusion (SSAO) approximates Ambient Occlusion in real time as a post-processing image effect. It darkens creases, holes, and surfaces that are close to each other. In the real world, such areas look dark because they block (occlude) the ambient light.
Like the other image effects, this effect is only available in Unity Pro, and the Pro Standard Assets must be imported before it can be used.

SSAO applied to the scene

The same scene rendered without SSAO for comparison. Note the differences where objects and plants meet the ground.
Properties
| Radius | Ambient occlusion is applied when the distance between neighboring surfaces is shorter than this radius. |
| Sample Count | The number of ambient occlusion samples. Higher values improve quality at the cost of processing overhead. |
| Occlusion Intensity | The degree of darkness that ambient occlusion applies. |
| Blur | The amount of blur applied to the darkening. No blur (a value of zero) is faster, but the darkened areas become noisier. |
| Downsampling | The resolution at which the calculations are performed (e.g. a value of 2 uses half the screen resolution). Larger values shorten rendering time at the cost of quality. |
| Occlusion Attenuation | How strongly occlusion attenuates with distance. |
| Min Z | Increase this value if visual artifacts occur. |
Details
SSAO approximates ambient occlusion as a post-processing image effect. Its processing cost depends only on the screen resolution and the SSAO parameters; it is unaffected by scene complexity (true ambient occlusion would be). The approximation does, however, tend to introduce artifacts. For example, objects outside the screen do not contribute to occlusion, and the amount of occlusion depends on the viewing angle and camera position.
SSAO can consume a lot of processing time, so it should generally be used only on high-end graphics cards. Using SSAO causes Unity to render the camera's depth and normals texture, which increases the number of draw calls and adds CPU overhead. The depth and normals texture can be reused by other effects, however (Depth of Field, for example), and once the texture has been generated, the rest of the SSAO effect runs on the graphics card.
Hardware requirements
SSAO works on graphics cards with Shader Model 3.0 support (e.g. GeForce 6 and later, Radeon X1300 and later). All image effects automatically disable themselves when they cannot run on a particular graphics card. Due to the complexity of the effect, SSAO is not supported on mobile devices.
Page last updated: 2012-11-26
script-TwirlEffect
The Twirl image effect distorts the rendered image. Pixels at the center of the circle are rotated by a specified angle; the rotation of the other pixels in the circle decreases with distance from the center, dropping to zero at the circle's edge.
Twirl is similar to another image effect called Vortex, although Vortex distorts the image around a central circle rather than a single point.
Like all image effects, Twirl is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.

Twirl effect applied to the scene
| Radius | The radius of the twirl in normalized screen coordinates (e.g. a radius of 0.5 is half the size of the screen). |
| Angle | The angle of rotation at the center. |
| Center | The center position of the twirl. |
Hardware support
This effect requires a graphics card with pixel shader (2.0) or OpenGL ES 2.0 support. Supported devices are, on PC, NVIDIA cards since 2003 (GeForce FX), AMD cards since 2005 (Radeon 9500), and Intel cards since 2005 (GMA 900); mobile devices require OpenGL ES 2.0; among consoles, the Xbox 360 and PS3 are supported.
All image effects automatically disable themselves when they cannot run on the end user's graphics card.
Page last updated: 2012-11-13
script-VortexEffect
The Vortex image effect distorts the rendered image. Pixels around the center are rotated by a specified angle; the displacement decreases with distance from the center, dropping to zero at the circle's edge. Vortex is similar to Twirl, although Twirl distorts the image around a point rather than a circle.
Like all image effects, Vortex is only available in Unity Pro. Make sure to have the Pro Standard Assets installed.

Vortex effect applied to the scene
| Radius | The radius of the circle within which the image is distorted. A radius of 0.5 is half the size of the screen. |
| Angle | Controls the amount of distortion applied. |
| Center | Positions the distortion area on the screen. |
Hardware support
This effect requires a graphics card with pixel shader (2.0) or OpenGL ES 2.0 support. Supported devices are, on PC, NVIDIA cards since 2003 (GeForce FX), AMD cards since 2005 (Radeon 9500), and Intel cards since 2005 (GMA 900); mobile devices require OpenGL ES 2.0; among consoles, the Xbox 360 and PS3 are supported.
All image effects automatically disable themselves when they cannot run on the end user's graphics card.
Page last updated: 2012-11-13
comp-ManagerGroup
- Audio Manager
- Editor Settings
- Input Manager
- Network Manager
- Physics Manager
- Player Settings
- Quality Settings
- Render Settings
- Script Execution Order Settings
- Tag Manager
- Time Manager
class-AudioManager
The Audio Manager allows you to tweak the maximum volume of all sounds playing in the scene. To see it, select Edit->Project Settings->Audio from the menu.

Properties
| Volume | The volume of all sounds playing. |
| Rolloff Scale | Sets the global attenuation rolloff factor for Logarithmic Rolloff-based sources (see Audio Source). The higher the value, the faster the volume attenuates; the lower the value, the slower it attenuates (a value of 1 simulates the real world). |
| Speed of Sound | The speed of sound. 343 is the real-world speed of sound if the units in your game are meters. You can adjust this value to make the Doppler effect more or less pronounced for faster- or slower-moving objects. |
| Doppler Factor | How audible the Doppler effect is. Set it to zero to turn the effect off; at 1 it is quite noticeable for fast-moving objects. |
| Default Speaker Mode | Defines which speaker mode should be the default for your project. The default is 2, for a stereo speaker setup (see AudioSpeakerMode in the scripting API reference for a list of modes). |
| DSP Buffer Size | The size of the DSP buffer can be set to optimize for latency or performance. |
| Default | The default buffer size. |
| Best Latency | Trades off performance in favor of latency. |
| Good Latency | Balances latency and performance. |
| Best Performance | Trades off latency in favor of performance. |
Details
If you want to use the Doppler effect, set Doppler Factor to 1, then tweak both Speed of Sound and Doppler Factor until you are satisfied.
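As a rough illustration of how these two settings interact, here is the textbook Doppler equation with the shift scaled by a factor (plain Python; Unity's internal formula is not documented here, so this is an assumption for illustration only):

```python
def doppler_pitch(speed_of_sound=343.0, source_velocity=0.0, doppler_factor=1.0):
    """Pitch multiplier heard by a stationary listener for a source
    moving toward it (positive velocity), using the classic Doppler
    equation, with the shift scaled by the Doppler Factor setting."""
    shift = speed_of_sound / (speed_of_sound - source_velocity)
    return 1.0 + (shift - 1.0) * doppler_factor

# A stationary source is unshifted; a factor of 0 disables the effect.
assert doppler_pitch(source_velocity=0.0) == 1.0
assert doppler_pitch(source_velocity=100.0, doppler_factor=0.0) == 1.0
# A source approaching at 100 m/s sounds noticeably higher pitched.
assert doppler_pitch(source_velocity=100.0) > 1.0
```

Lowering Speed of Sound makes the same source velocity produce a larger shift, which is why the two settings are tuned together.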
The speaker mode can be changed at runtime from your application through scripting; see Audio Settings.
class-EditorManager

Properties
| Version Control | The mode of version control usage. |
| WWW Security Emulation | When testing in the webplayer, the editor can simulate as if the content were hosted at the specified URL. |
| Asset Serialization | To assist with merging under version control, Unity can store scene files in a text-based format (see Text Scene Format for details). If scenes are not merged, Unity can store them in a more space-efficient binary format, or allow both text and binary scene files to exist at the same time. |
class-InputManager
Desktop
The Input Manager is where you define all the different input axes and game actions for your project.

The Input Manager
The Input Manager can be viewed by selecting Edit->Project Settings->Input from the menu.
Properties
| Axes | Contains all the input axes defined for the current project: Size is the number of input axes in this project, and Element 0, 1, ... are the particular axes to modify. |
| Name | The string that refers to the axis in the game launcher and through scripting. |
| Descriptive Name | A detailed definition of the Positive Button function that is displayed in the game launcher. |
| Descriptive Negative Name | A detailed definition of the Negative Button function that is displayed in the game launcher. |
| Negative Button | The button that sends a negative value to the axis. |
| Positive Button | The button that sends a positive value to the axis. |
| Alt Negative Button | An alternative button that sends a negative value to the axis. |
| Alt Positive Button | An alternative button that sends a positive value to the axis. |
| Gravity | How fast the input recenters. Only used when Type is key / mouse button. |
| Dead | Any positive or negative value smaller than this number registers as zero. Useful for joysticks. |
| Sensitivity | For keyboard input, a larger value results in faster response times; a lower value is smoother. For mouse delta, the value scales the actual mouse delta. |
| Snap | If enabled, the axis value is reset to zero immediately after receiving opposite input. Only used when Type is key / mouse button. |
| Invert | If enabled, the positive buttons send negative values to the axis, and vice versa. |
| Type | Use Key / Mouse Button for any kind of button, Mouse Movement for mouse delta and scroll wheel, Joystick Axis for analog joysticks, and Window Movement for when the user shakes the window. |
| Axis | The axis of input from the device (joystick, mouse, gamepad, etc.). |
| Joy Num | Which joystick should be used. By default, input is gathered from all joysticks. This is only used for input axes, not buttons. |
Details
All the axes that you set up in the Input Manager serve two purposes:
- They allow you to reference your inputs by axis name in scripting.
- They allow the players of your game to customize the controls to their liking.
All defined axes are presented to the player in the game launcher, where they will see the axis name, a detailed description, and the default button(s). From here, they have the option to change any of the buttons defined for the axis. Therefore, it is best to write your scripts using axes instead of individual buttons, in case players want to customize the buttons for your game.
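The Gravity/Sensitivity behavior of a key-driven axis can be sketched roughly as follows (plain Python; this is a simplified model of the smoothing these properties describe, not Unity's actual implementation):

```python
def step_axis(value, pressed_direction, sensitivity, gravity, dt):
    """Advance a virtual axis value by one frame.
    pressed_direction is -1, 0 or +1 depending on which button is held.
    Sensitivity moves the value toward the pressed direction; Gravity
    recenters it toward 0 when nothing is held (simplified model)."""
    if pressed_direction != 0:
        value += pressed_direction * sensitivity * dt
    else:
        # Recenter toward zero without overshooting.
        if value > 0:
            value = max(0.0, value - gravity * dt)
        else:
            value = min(0.0, value + gravity * dt)
    return max(-1.0, min(1.0, value))  # axis values stay in [-1, 1]

# Holding the positive button ramps the axis up, clamped to 1...
v = 0.0
for _ in range(10):
    v = step_axis(v, +1, sensitivity=3.0, gravity=3.0, dt=0.1)
# ...and releasing it lets Gravity pull the value back toward zero.
v = step_axis(v, 0, sensitivity=3.0, gravity=3.0, dt=0.1)
```

Higher Sensitivity shortens the ramp-up; higher Gravity shortens the recentering, which is the "response time vs. smoothness" trade-off the table describes.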

The game launcher's Input window, displayed when the game starts
See also: Input.
Hints
- Axes are shown clearly to the player in the game launcher, so it is not the best place to put hidden functions you don't want seen.

iOS
This section is not supported on iOS devices.
For details on how to handle input on iOS devices, see the iOS Input page.

Android
This section is not supported on Android devices.
For details on how to handle input on Android devices, see the Android Input page.
class-NavMeshLayers
NavMesh layers (Unity Pro only)
The job of the navigation system is to find the optimal path between navigable positions in the scene. In the simplest case, that is the shortest path. More advanced scenes, however, may contain regions that are harder to traverse than others (for example, crossing a river is more costly than crossing a bridge). To model this, Unity uses the concept of cost, and the "optimal path" is defined as the path with the lowest cost. To manage regions of different costs, Unity has the concept of NavMesh layers: objects marked as Navmesh Static belong to a NavMesh layer.
During pathfinding, paths are evaluated by their cost rather than by comparing their lengths. To compute the cost, the path is divided into segments, and the length of each segment is multiplied by the cost of its NavMesh layer. Note that when all costs are set to 1, the optimal path is equivalent to the shortest path.
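The cost evaluation just described can be sketched in a few lines (plain Python illustration, not the engine's actual code; the layer names and costs are made up for the example):

```python
def path_cost(segments, layer_costs):
    """Total cost of a path: the sum of each segment's length
    multiplied by the cost of the NavMesh layer it crosses."""
    return sum(length * layer_costs[layer] for length, layer in segments)

layer_costs = {"Default": 1.0, "Road": 5.0, "Crossing": 1.0}

# A longer detour via the crossing can still be cheaper than
# cutting straight across the high-cost road.
direct = [(10.0, "Road")]
detour = [(6.0, "Default"), (4.0, "Crossing"), (6.0, "Default")]
assert path_cost(direct, layer_costs) > path_cost(detour, layer_costs)
```

With every cost set to 1.0, the cheapest path is exactly the shortest one, as noted above.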
To define custom layers per project:
- Choose Edit → Project Settings → Navmesh Layers from the menu.
- Go to one of the user layers and set up its Name and Cost.
- The Name uniquely identifies the NavMesh layer in the scene.
- The Cost indicates how difficult it is to traverse the NavMesh layer: 1 is the default, 2 is twice as difficult, 0.5 is half as difficult, and so on.
- There are three built-in layers:
- Default - the cost applied when nothing else is specified.
- Not Walkable - the cost is ignored, because the area cannot be traversed.
- Jump - the cost of automatically generated off-mesh links.
To apply a custom layer to specific objects:
- Select the object in the editor.
- Open the navigation mesh window (choose Window → Navigation from the menu).
- Go to the Object tab and select the navigation layer you want to apply.
If you turn on Show NavMesh in the Navmesh Display window, each layer is drawn in a different color in the editor.
To specify which layers an agent can traverse:
- Open the NavMesh Agent component attached to the agent's object.
- Modify its NavMesh Walkable property.
- Be sure to set the agent's destination from a script.
Note: costs below 1 are not recommended, because pathfinding is then not guaranteed to find the optimal path.
One straightforward use case for NavMesh layers is the following:
- There is a road that pedestrians (agents) have to cross.
- The crossing in the middle of the road is the preferred route for pedestrians.
- Set up a NavMesh layer with a high cost for the road, and a NavMesh layer with a low cost for the crossing.
- This makes agents prefer paths that go via the crossing.
For more advanced pathfinding topics, see Off-mesh links.
(back to Navigation and Pathfinding)
Page last updated: 2012-11-11
class-NetworkManager
The Network Manager contains two very important properties for creating networked multiplayer games.

The Network Manager
You can access the Network Manager by selecting Edit->Project Settings->Network from the menu bar.
Properties
| Debug Level | The level of messages displayed in the console. |
| Off | Only errors are displayed. |
| Informational | Significant network events are displayed. |
| Full | All network events are displayed. |
| Sendrate | The number of times per second that data is sent over the network. |
Details
Adjusting the Debug Level can be enormously helpful for fine-tuning and debugging your game's network behavior. To begin with, setting it to Full lets you see every network action that is performed. This gives you an overall idea of how frequently you are using network communication and how much bandwidth you are using as a result.
When set to Informational, you can see the larger events but not each individual action. Assigning unique Network IDs and buffering RPC calls are logged here.
When set to Off, only network errors are displayed in the console.
The data that is sent at each Sendrate interval (1 second / Sendrate = interval) depends on the Network View properties of each broadcasting object. If the network view uses Unreliable, its data is sent at every interval. If the network view uses Reliable Delta Compressed, Unity checks whether the observed object has changed since the last interval; if it has, the data is sent.
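The relationship between Sendrate and the update interval stated above (1 second / Sendrate = interval) can be written out directly (plain Python; the example Sendrate values are arbitrary):

```python
def send_interval(sendrate):
    """Seconds between network updates: 1 second / Sendrate."""
    return 1.0 / sendrate

# A Sendrate of 15 sends state roughly every 67 ms; doubling it
# to 30 halves the interval at the cost of extra bandwidth.
assert send_interval(15) == 1.0 / 15
assert send_interval(30) == send_interval(15) / 2
```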
Page last updated: 2012-11-11
class-PhysicsManager
You can access the Physics Manager by selecting Edit->Project Settings->Physics from the menu bar.

The Physics Manager
Properties
| Gravity | The amount of gravity applied to all Rigidbodies. Usually gravity acts only on the Y axis (negative is down). Gravity is measured in meters/(seconds^2). |
| Default Material | The default Physic Material used when none has been assigned to an individual Collider. |
| Bounce Threshold | Two colliding objects with a relative velocity below this value will not bounce. This value also reduces jitter, so it is not recommended to set it very low. |
| Sleep Velocity | The default linear velocity below which objects start going to sleep. |
| Sleep Angular Velocity | The default angular velocity below which objects start going to sleep. |
| Max Angular Velocity | The default maximum angular velocity permitted for any rigidbody. A rigidbody's angular velocity is clamped to stay within Max Angular Velocity to avoid the many instabilities of fast-rotating bodies. Because this can prevent intentional fast rotations of objects such as wheels, you can override this value for any rigidbody by scripting Rigidbody.maxAngularVelocity. |
| Min Penetration For Penalty | How deep, in meters, two objects can penetrate before the collision solver pushes them apart. A higher value means objects penetrate more, but with less jitter. |
| Solver Iteration Count | Determines how accurately joints and contacts are resolved. Usually a value of 7 works very well in almost all situations. |
| Raycasts Hit Triggers | If enabled, raycasts that intersect a collider marked as a trigger return a hit; if disabled, these intersections do not return a hit. |
| Layer Collision Matrix | Defines how the layer-based collision detection system behaves. |
Details
The Physics Manager is where you define the default behaviors of your world. For an explanation of Rigidbody sleeping, see this page about sleeping.
Hints
- If you are having trouble with connected bodies oscillating or behaving erratically, setting a higher Solver Iteration Count may improve their stability, but at the cost of more processing power.
class-PlayerSettings40
Player Settings is where you define various (platform-specific) parameters for the final game that you build with Unity. Some of these values are used, for example, in the Resolution Dialog that launches when you open a standalone game, while others are used by Xcode when building your game for iOS devices, so it is important to fill them in correctly.
To see the Player Settings, choose Edit->Project Settings->Player from the menu.

Global settings that apply to every project you build
| Cross-Platform Properties | |
|---|---|
| Company Name | The name of your company. Used to locate the preferences file. |
| Product Name | The name that appears in the menu bar when your game is running. Also used to locate the preferences file. |
| Default Icon | The default icon the application will have on every platform (you can override this later for platform-specific needs). |
| Default Cursor | The default cursor the application will have on every supported platform. |
| Cursor Hotspot | The cursor hotspot, in pixels from the top left corner of the default cursor. |
Per-Platform Settings

Desktop
Web Player
Resolution and Presentation

| Resolution | |
| Default Screen Width | The screen width the Web Player is generated with. |
| Default Screen Height | The screen height the Web Player is generated with. |
| Run in background | Check this if you don't want to stop executing your game when the Web Player loses focus. |
| WebPlayer Template | For more information, check the Using WebPlayer Templates page; each built-in and custom template is shown here with an icon. |
Icon

Icons have no effect in WebPlayer builds (you can set icons in the sections of the Player Settings for each native client build).
Other Settings

| Rendering | |
| Rendering Path | This property is shared between standalone and WebPlayer content. |
| Vertex Lit | Lowest lighting fidelity, no support for shadows. Best used on old machines or limited mobile platforms. |
| Forward with Shaders | Good support for lighting features; limited support for shadows. |
| Deferred Lighting | Best support for lighting and shadow features, but requires a certain level of hardware support. Best used if you have many realtime lights. Unity Pro only. |
| Color Space | The color space used for rendering. |
| GammaSpace Rendering | Rendering is gamma-corrected. |
| Linear Rendering Hardware Sampling | Rendering is done in linear space. |
| Use Direct3D 11 | Use Direct3D 11 for rendering. |
| Static Batching | Set this to use static batching on your build (disabled by default for the WebPlayer). Unity Pro only. |
| Dynamic Batching | Set this to use dynamic batching on your build (enabled by default). |
| Streaming | |
| First Streamed Level | If you are publishing a Streamed Web Player, this is the index of the first level that has access to all Resources.Load assets. |
| Configuration | |
| Scripting Define Symbols | Custom compilation flags (see the Platform Dependent Compilation page for details). |
| Optimization | |
| Optimize Mesh Data | Removes all data from meshes that is not required by the material applied to them (tangents, normals, colors, UVs). |
Standalone
Resolution and Presentation

| Resolution | |
| Default Screen Width | The screen width the standalone game uses by default. |
| Default Screen Height | The screen height the player uses by default. |
| Run in background | Check this if you don't want to stop executing your game when it loses focus. |
| Standalone Player Options | |
| Default is Full Screen | Check this if you want to start your game by default in full screen mode. |
| Capture Single Screen | If enabled, a standalone game running in fullscreen mode will not darken the secondary monitor in multi-monitor setups. |
| DisplayResolution Dialog | |
| Disabled | No resolution dialog appears when the game starts. |
| Enabled | The resolution dialog appears when the game starts. |
| Hidden by default | The resolution dialog appears only if the "Alt" key is held down when the game starts. |
| Use Player Log | Writes a log file with debugging information. If you plan to submit to the Mac App Store, leave this option unchecked. It is checked by default. |
| Resizable Window | Allows the user to resize the standalone player window. |
| Mac App Store Validation | Enables Receipt Validation for the Mac App Store. |
| Mac Fullscreen Mode | Options for fullscreen mode on Mac builds. |
| Capture Display | Unity takes control of the entire display (i.e. GUI from other apps will not be shown, and the user cannot switch apps until fullscreen mode is exited). |
| Fullscreen Window | Unity runs in a window covering the whole screen at desktop resolution. GUI from other apps is displayed correctly, and on OS X 10.7+ it is possible to switch apps with Cmd+Tab or trackpad gestures. |
| Fullscreen Window with Menu Bar and Dock | Same as Fullscreen Window mode, but the standard menu bar and Dock are also displayed. |
| Supported Aspect Ratios | The aspect ratios selectable in the Resolution dialog are the ones enabled in this list, as long as they are supported by the user's monitor. |
Icon

| Override for Standalone | Check this if you want to assign a custom icon to your standalone game. Different-sized icons should fit into the squares below. |
Splash Image

| Config Dialog Banner | Adds a custom splash image that is displayed when the game starts. |
Other Settings

| Rendering | |
| Rendering Path | This property is shared between standalone and WebPlayer content. |
| Vertex Lit | Lowest lighting fidelity, no support for shadows. Best used on old machines or limited mobile platforms. |
| Forward with Shaders | Good support for lighting features; limited support for shadows. |
| Deferred Lighting | Best support for lighting and shadow features, but requires a certain level of hardware support. Best used if you have many realtime lights. Unity Pro only. |
| Color Space | The color space used for rendering. |
| GammaSpace Rendering | Rendering is gamma-corrected. |
| Linear Rendering Hardware Sampling | Rendering is done in linear space. |
| Static Batching | Set this to use static batching on your build (disabled by default for the WebPlayer). Unity Pro only. |
| Dynamic Batching | Set this to use dynamic batching on your build (enabled by default). |
| Configuration | |
| Scripting Define Symbols | Custom compilation flags (see the Platform Dependent Compilation page for details). |
| Optimization | |
| API Compatibility Level | |
| .Net 2.0 | .Net 2.0 libraries. Maximum .NET compatibility, biggest file sizes. |
| .Net 2.0 Subset | A subset of full .NET compatibility, smaller file sizes. |
| Optimize Mesh Data | Removes all data from meshes that is not required by the material applied to them (tangents, normals, colors, UVs). |

iOS
Resolution and Presentation

| Resolution | |
| Default Orientation | (This property is shared between iOS and Android.) |
| Portrait | The device is in portrait mode, held upright with the home button at the bottom. |
| Portrait Upside Down | The device is in portrait mode upside down, held upright with the home button at the top. |
| Landscape Right | The device is in landscape mode, held sideways with the home button on the left side. |
| Landscape Left | The device is in landscape mode, held sideways with the home button on the right side. |
| Auto Rotation | The screen orientation is set automatically based on the physical device orientation. |
| Auto Rotation settings | |
| Use Animated Autorotation | When checked, orientation changes are animated. Only applies when Default Orientation is set to Auto Rotation. |
| Allowed orientations for Auto Rotation | |
| Portrait | When checked, portrait orientation is allowed. Only applies when Default Orientation is set to Auto Rotation. |
| Portrait Upside Down | When checked, upside-down portrait orientation is allowed. Only applies when Default Orientation is set to Auto Rotation. |
| Landscape Right | When checked, landscape orientation with the home button on the left is allowed. Only applies when Default Orientation is set to Auto Rotation. |
| Landscape Left | When checked, landscape orientation with the home button on the right is allowed. Only applies when Default Orientation is set to Auto Rotation. |
| Status Bar | |
| Status Bar Hidden | Specifies whether the status bar is initially hidden when the application launches. |
| Status Bar Style | Specifies the style of the status bar when the application launches. |
| Default | |
| Black Translucent | |
| Black Opaque | |
| Use 32-bit Display Buffer | Specifies that the display buffer should be created to hold 32-bit color values (16-bit by default). Use it if you see banding, or need alpha in your ImageEffects, since they create render textures (RTs) in the same format as the display buffer. |
| Show Loading Indicator | Options for the loading indicator. |
| Don't Show | No indicator. |
| White Large | Shows a large white indicator. |
| White | Shows a normal-sized white indicator. |
| Gray | Shows a normal-sized gray indicator. |
Icon

| Override for iOS | Check this if you want to assign a custom icon to your iPhone/iPad game. Different-sized icons should fit into the squares below. |
| Prerendered icon | If unchecked, iOS applies a glossy, beveled effect to the application icon. |
Splash Image

| Mobile Splash Screen (Unity Pro only) | Specifies the texture that should be used for the iOS splash screen. The standard splash screen size is 320x480. (This property is shared between iOS and Android.) |
| High Res. iPhone (Unity Pro only) | Specifies the texture that should be used for the splash screen of fourth-generation iOS devices. The splash screen size is 640x960. |
| iPad Portrait (Unity Pro only) | Specifies the texture that should be used as the iPad portrait splash screen. The standard splash screen size is 768x1024. |
| High Res. iPad Portrait | Specifies the texture that should be used as the high-resolution iPad portrait splash screen. The standard splash screen size is 1536x2048. |
| iPad Landscape (Unity Pro only) | Specifies the texture that should be used as the iPad landscape splash screen. The standard splash screen size is 1024x768. |
| High Res. iPad Landscape (Unity Pro only) | Specifies the texture that should be used as the high-resolution iPad landscape splash screen. The standard splash screen size is 2048x1536. |
Other Settings

Rendering
| Static Batching | Set this to use static batching on your build (enabled by default). Unity Pro only. |
| Dynamic Batching | Set this to use dynamic batching on your build (enabled by default). |
| Identification | |
| Bundle Identifier | The string used in your provisioning certificate from your Apple Developer Network account (shared between iOS and Android). |
| Bundle Version | Specifies the build version number of the bundle, which identifies an iteration (released or unreleased) of the bundle. This is a monotonically increasing string made up of one or more period-separated numbers. |
| Configuration | |
| Target Device | Specifies the application's target device type. |
| iPhone Only | Targets iPhone devices only. |
| iPad Only | Targets iPad devices only. |
| iPhone + iPad | Targets both iPad and iPhone devices. |
| Target Resolution | The resolution you want to use on the deployed device. (This setting has no effect on devices with a maximum resolution of 480x320.) |
| Native (Default Device Resolution) | Uses the device's native resolution. |
| Auto (Best Performance) | Chooses the resolution automatically, favoring performance over graphics quality. |
| Auto (Best Quality) | Chooses the resolution automatically, favoring graphics quality over performance. |
| 320p (iPhone) | Pre-Retina iPhone display. |
| 640p (iPhone Retina Display) | iPhone Retina display. |
| 768p (iPad) | iPad display. |
| Graphics Level | OpenGL version. |
| OpenGL ES 1.x | OpenGL ES 1.x versions. |
| OpenGL ES 2.0 | OpenGL ES 2.0. |
| Accelerometer Frequency | How often the accelerometer is sampled. |
| Disabled | The accelerometer is not sampled. |
| 15Hz | 15 samples per second. |
| 30Hz | 30 samples per second. |
| 60Hz | 60 samples per second. |
| 100Hz | 100 samples per second. |
| Override iPod Music | If checked, the application silences the user's iPod music; otherwise the user's iPod music continues playing in the background. |
| UI Requires Persistent WiFi | Specifies whether the application requires a Wi-Fi connection. iOS maintains the active Wi-Fi connection while the application is running. |
| Exit on Suspend | Specifies whether the application should quit when suspended to the background, on iOS versions that support multitasking. |
| Scripting Define Symbols | Custom compilation flags (see Platform Dependent Compilation for details). |
| Optimization | |
| Api Compatibility Level | Specifies the active .NET API profile. |
| .Net 2.0 | .Net 2.0 libraries. Maximum .NET compatibility, biggest file sizes. |
| .Net 2.0 Subset | A subset of full .NET compatibility, smaller file sizes. |
| AOT compilation options | Additional AOT compiler options. |
| SDK Version | Specifies the iPhone OS SDK version to use for building in Xcode. |
| Device SDK | SDK for running on actual hardware. |
| Simulator SDK | SDK for running only in the simulator. |
| Target iOS Version | Specifies the oldest iOS version the final application is able to run on; ranges from iOS 4.0 to 6.0. |
| Stripping Level (Unity Pro only) | Options to strip out scripting features to reduce the built player's size (this setting is shared between the iOS and Android platforms). |
| Disabled | No stripping is done. |
| Strip Assemblies | Level 1 size reduction. |
| Strip ByteCode | Level 2 size reduction (includes level 1). |
| Use micro mscorlib | Level 3 size reduction (includes levels 1 and 2). |
| Script Call Optimization | Option to disable exception handling in exchange for speed at runtime. |
| Slow and Safe | Full exception handling occurs on the device, with a slight performance impact. |
| Fast but no Exceptions | No exception data is provided on the device, but the game runs faster. |
| Optimize Mesh Data | Removes all data from meshes that is not required by the material applied to them (tangents, normals, colors, UVs). |
Note: if you build for, say, iPhone OS 3.2 and then select Simulator 3.2 in Xcode, you will get a ton of errors. Be sure to select the correct Target SDK in the Unity editor.

Android
Resolution and Presentation

Resolution and presentation for your Android project builds
| Resolution | |
| Default Orientation | (This property is shared between iOS and Android.) |
| Portrait | The device is in portrait mode, held upright with the home button at the bottom. |
| Portrait Upside Down | The device is in portrait mode upside down, held upright with the home button at the top. |
| Landscape Right | The device is in landscape mode, held sideways with the home button on the left side. |
| Landscape Left | The device is in landscape mode, held sideways with the home button on the right side. |
| Use 32-bit Display Buffer | Specifies whether the display buffer should be created to hold 32-bit color values (16-bit by default). Use it if you see banding, or need alpha in your ImageEffects, since they create render textures (RTs) in the same format as the display buffer. Not supported on devices running pre-Gingerbread OS (they are forced to 16-bit). |
| Use 24-bit Depth Buffer | Specifies that the depth buffer should be created to hold (at least) 24-bit depth values. Use it only if you see "z-fighting" or other artifacts, as it may have a performance impact. |
| Icon | |
|---|---|

The different icons your project will have when built.
| Override for Android | Check this if you want to assign a custom icon to your Android game. Different-sized icons should fit into the squares below. |
| Splash Image | |
|---|---|

The splash image displayed when the project launches.
| Mobile Splash Screen (Unity Pro only) | Specifies the texture that should be used for the splash screen. The standard splash screen size is 320x480. (This is shared between Android and iOS.) |
| Splash Scaling | Specifies how the splash image is scaled on the device. |
| Other Settings | |
|---|---|

| Rendering | |
| Static Batching | Set this to use static batching on your build (disabled by default for the WebPlayer). Unity Pro only. |
| Dynamic Batching | Set this to use dynamic batching on your build (enabled by default). |
| Identification | |
| Bundle Identifier | The string used in your provisioning certificate from your Apple Developer Network account (shared between iOS and Android). |
| Bundle Version | Specifies the build version number of the bundle, which identifies an iteration (released or unreleased) of the bundle. This is a monotonically increasing string made up of one or more period-separated numbers. (Shared between iOS and Android.) |
| Bundle Version Code | An internal version number. This number is used only to determine whether one version is more recent than another; higher numbers indicate more recent versions. This is not the version number shown to users; that number is set by the versionName attribute. The value must be set as an integer, such as "100". You can define it however you like, as long as each successive version has a higher number. For example, it could be a build number; or you could translate a version number in "x.y" format into an integer by encoding "x" and "y" separately in the lower and upper 16 bits; or you could simply increase the number by one each time a new version is released. |
| Minimum API Level | The minimum API version required to support your build. |
| Configuration | |
| Graphics Level | Select either ES 1.1 ("fixed function") or ES 2.0 ("shader based") as the OpenGL version. Only ES 1.x is supported when using an AVD (emulator). |
| Install Location | Specifies where the application is installed on the device (for details, see http://developer.android.com/guide/appendix/install-location.html). |
| Automatic | Let the OS decide; the user will be able to move the app back and forth afterwards. |
| Prefer External | Install the app on external storage (SD card) if possible. The OS does not guarantee this; if it is not possible, the app is installed to internal memory. |
| Force Internal | Force the app to be installed to internal memory. The user will be unable to move the app to external storage. |
| Internet Access | When set to Require, the network access permission is enabled even if your scripts do not use it. Automatically enabled for development builds. |
| Write Access | When set to External (SDCard), write access to external storage such as the SD card is enabled. Automatically enabled for development builds. |
| Scripting Define Symbols | Custom compilation flags (see Platform Dependent Compilation for details). |
| Optimization | |
| Api Compatibility Level | Specifies the active .NET API profile. |
| .Net 2.0 | .Net 2.0 libraries. Maximum .NET compatibility, biggest file sizes. |
| .Net 2.0 Subset | A subset of full .NET compatibility, smaller file sizes. |
| Stripping Level (Unity Pro only) | Options to strip out scripting features to reduce the built player's size (this setting is shared between the iOS and Android platforms). |
| Disabled | No stripping is done. |
| Strip Assemblies | Level 1 size reduction. |
| Strip ByteCode | Level 2 size reduction (includes level 1). |
| Use micro mscorlib | Level 3 size reduction (includes levels 1 and 2). |
| Enable "logcat" profiler | Enable this if you want to get feedback from your device while testing your projects: adb logcat then prints logs from the device to the console (available in development builds only). |
| Optimize Mesh Data | Removes all data from meshes that is not required by the material applied to them (tangents, normals, colors, UVs). |
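The "x.y"-to-integer encoding suggested for Bundle Version Code can be sketched as follows (plain Python; putting the major number in the upper 16 bits is one of the schemes the text mentions, not a requirement):

```python
def encode_version(major, minor):
    """Pack an 'x.y' version into a single integer: the major number
    in the upper 16 bits, the minor number in the lower 16 bits, so
    later versions always compare as larger integers."""
    assert 0 <= major < 2**16 and 0 <= minor < 2**16
    return (major << 16) | minor

# 1.2 -> 65538, and 2.0 correctly sorts after every 1.x release.
assert encode_version(1, 2) == 65538
assert encode_version(2, 0) > encode_version(1, 65535)
```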
| Publishing Settings |
|---|

Publishing settings for the Android Market
| Keystore | |
| Use Existing Keystore / Create New Keystore | Use this to choose whether to create a new keystore or use an existing one. |
| Browse Keystore | Lets you select an existing keystore. |
| Keystore password | Password for the keystore. |
| Confirm password | Password confirmation; only enabled when the Create New Keystore option is selected. |
| Key | |
| Alias | Key alias. |
| Password | Password for the key alias. |
| Split Application Binary | Flag to split the application into an expansion file. Only useful when the final build exceeds 50 MB on the Google Play Store. |
Note that for security reasons, Unity saves neither the keystore password nor the key password. Also note that signing must be done from Unity's Player Settings; using jarsigner will not work.
Flash
Resolution and Presentation

| Resolution | |
| Default Screen Width | The screen width the player is generated with. |
| Default Screen Height | The screen height the player is generated with. |
Other Settings

| Optimization | |
| Stripping | An option to strip bytecode when building. |
| Strip Physics Code | Strips the physics engine code when it is not needed. |
| Optimize Mesh Data | Removes all data from meshes that is not required by the material applied to them (tangents, normals, colors, UVs). |
Google Native Client
Resolution and Presentation

| Resolution | |
| Default Screen Width | The screen width the player is generated with. |
| Default Screen Height | The screen height the player is generated with. |
Icon

The different icons your project will have when built.
| Override for Web | Check this if you want to assign a custom icon to your Native Client game. Different-sized icons should fit into the squares below. |
Other Settings

| Rendering | |
| Static Batching | Set this to use static batching on your build (disabled by default for the WebPlayer). Unity Pro only. |
| Dynamic Batching | Set this to use dynamic batching on your build (enabled by default). |
| Configuration | |
| Scripting Define Symbols | Custom compilation flags (see the Platform Dependent Compilation page for details). |
| Optimization | |
| API Compatibility Level | |
| .Net 2.0 | .Net 2.0 libraries. Maximum .NET compatibility, biggest file sizes. |
| .Net 2.0 Subset | A subset of full .NET compatibility, smaller file sizes. |
| Strip Physics Code | Strips the physics engine code when it is not needed. |
| Optimize Mesh Data | Removes all data from meshes that is not required by the material applied to them (tangents, normals, colors, UVs). |
Details

Desktop
The Player Settings window is where many technical settings get their default values. See also the Quality Settings, where you can set up different graphics quality levels.
Publishing a WebPlayer
Default Web Screen Width and Default Web Screen Height determine the size used in the html file. You can change the size in the html file later.
Default Screen Width and Default Screen Height are used by the Web Player when entering fullscreen mode through the context menu while the Web Player is running.
Customizing the Resolution Dialog

The Resolution dialog displayed to end users
You have the option of adding a custom banner image to the standalone player's Screen Resolution dialog. The maximum image size is 432x163 pixels. The image will not be scaled up to fit the screen selector; instead, it will be centered and cropped.
Publishing to the Mac App Store
Use Player Log enables writing a log file with debugging information. This is useful for investigating what happened if there are problems with your game. When publishing games for Apple's Mac App Store, it is recommended to turn this off, because Apple may reject your submission otherwise. See this manual page for further information about log files.
Use Mac App Store Validation enables Receipt Validation for the Mac App Store. If enabled, your game will only run when it contains a valid receipt from the Mac App Store. Use this when submitting games to Apple for publishing on the App Store. This prevents people from running the game on a computer other than the one it was purchased on. Note that this feature does not implement any strong copy protection. In particular, any potential crack against one Unity game would work against any other Unity content. For this reason, it is recommended that you implement your own receipt validation code on top of this, using Unity's plugin feature. However, since Apple requires plugin validation to initially happen before the screen setup dialog is displayed, you should still enable this check, or Apple might reject your submission.

iOS
Bundle Identifier
The Bundle Identifier string must match the provisioning profile of the game you are building. The basic structure of the identifier is com.CompanyName.GameName. This structure may vary depending on which country you reside in, so always use the string Apple provides for your developer account as your default. Your GameName is set up in your provisioning certificates, which are managed from the Apple iPhone Developer Center website. Please refer to the Apple iPhone Developer Center website for more information on how this is done.
Stripping Level (Unity Pro only)
Most games don't use all of the DLLs they are provided with. With this option, you can strip out the unused parts to reduce the size of the built player on iOS devices. If your game uses classes that would normally be stripped out by the option you have selected, you will be presented with a debug message when you build.
Script Call Optimization
A good development practice on iOS is to never rely on exception handling (either internally or through the use of try/catch blocks). When using the default Slow and Safe option, any exceptions that occur on the device are caught and a stack trace is provided. When using the Fast but no Exceptions option, any exceptions that occur will crash the game, and no stack trace is provided. However, the game runs faster since the processor does not divert power to handling exceptions. When releasing your game to the world, it is best to publish with the Fast but no Exceptions option.

Android
Bundle Identifier
The Bundle Identifier string is the unique name of your application when published to the Android Market and installed on the device. The basic structure of the identifier is com.CompanyName.GameName, and it can be chosen arbitrarily. In Unity, this field is shared with the iOS Player Settings for convenience.
Stripping Level (Unity Pro only)
Most games don't use all of the DLLs they are provided with. With this option, you can strip out the unused parts to reduce the size of the built player on Android devices.
class-QualitySettings
Unity allows you to set the level of graphical quality it will attempt to render. Generally speaking, quality comes at the expense of framerate, so aiming for the highest quality on mobile devices or older hardware will have a detrimental effect on gameplay. The Quality Settings inspector is split into two main areas. At the top, there is the following matrix:

Unity lets you assign a name to a given combination of quality options for easy reference. The rows of the matrix let you choose which platforms each quality level applies to. The Default row at the bottom of the matrix is not a quality level in itself, but sets the default quality level used for each platform (a green checkbox in a column denotes the level currently chosen for that platform). Unity comes with six quality levels pre-enabled, but you can add your own levels using the button below the matrix. You can use the trashcan icon (the rightmost column) to delete an unwanted quality level.
You can click on the name of a quality level to select it for editing, which is done in the panel below the matrix:

画質レベルのために選ぶことができる画質オプションは次のとおりです:
| Name | 画質レベルを参照するときに使用する名称 |
| Pixel Light Count | フォワードレンダリング使用時の最大ピクセルライト数 |
| Texture Quality | テクスチャを最大解像度で表示するか、何分の一かで表示するか選択する(低い解像度は処理オーバーヘッドを減少させる)。選択肢は次のとおり Full Res(フル解像度), Half Res(1/2解像度), Quarter Res(1/4解像度) and Eighth Res(1/8解像度)。 |
| Anisotropic Textures | 異方性テクスチャの設定 |
| Disabled | 異方性テクスチャは不使用 |
| Per Texture | 異方性レンダリングを各テクスチャごとに有効化 |
| Forced On | 異方性テクスチャを常に使用 |
| AntiAliasing | アンチエイリアシングのレベルを設定する。選択肢は 2x、4x、8x のマルチサンプリング。 |
| Soft Particles | ソフトブレンドをパーティクルに使用するか |
| Shadows | 使用されるべき影の種類 |
| Hard and Soft Shadows | ハードとソフトの両方の影をレンダリング |
| Hard Shadows Only | ハードの影のみレンダリング |
| Disable Shadows | 影がレンダリングされません。 |
| Shadow resolution | 影は異なる解像度でレンダリングすることができます: Low、Medium、High、Very High。解像度が高いほど、処理オーバーヘッドは大きくなります。 |
| Shadow Projection | ディレクショナル ライトから影を投影する2つの異なる方法があります。Close Fitは高解像度の影をレンダリングするが、カメラのわずかな動きで揺れることがあります。Stable Fitは、低解像度の影をレンダリングするが、カメラの動きで揺れません。 |
| Shadow Cascades | シャドウカスケードの数はゼロ、2または4に設定することができます。カスケードの数値が高いほど高品質が得られますが、処理オーバーヘッドを犠牲にします(詳細はディレクショナル シャドウのページを参照してください)。 |
| Shadow Distance | カメラから影が見える距離範囲。この距離を越えて投影される影はレンダリングされません。 |
| Blend Weights | アニメーションの際にひとつの頂点に影響を与えることができるボーンの数。使用可能なオプションは、1つ、2つ、または4つのボーンです。 |
| VSync Count | レンダリング処理をディスプレイのリフレッシュレートと同期させることで、ティアリングによる画像乱れ (下記参照) を避けることができる。すべての垂直ブランク (VBlank) と同期をとるか、ひとつおきの垂直ブランクと同期をとるか、まったく同期しないかを選択できます。 |
| LOD Bias | LODレベルはオブジェクトの画面上のサイズに基づいて選ばれます。サイズが2つのLODレベルの間にある場合、より詳細か、より詳細でないか、2つのモデルのいずれかの方に偏りをもたせることができます。値は0から1までの小数として設定出来ます。0に近いほど、より詳細でないモデルに偏ります。 |
| Maximum LOD Level | ゲームで使用される最大のLOD。これより高いLODを持つモデルは、使用されずビルド対象から外れます(ストレージとメモリ容量を節約)。 |
| Particle Raycast Budget | パーティクルシステムの衝突のために使用するraycastsの最大数(Medium,Low品質)。パーティクル衝突モジュール を参照のこと。 |
ティアリング
ディスプレイ上の画像は連続して更新されるのではなく、Unityのフレーム更新のような定期的な間隔で発生します。しかし、Unityの更新は必ずしもディスプレイと同期していませんので、ディスプレイがまだ前のフレームをレンダリングしている最中にUnityが新しいフレームを発行してしまうことがありえます。この場合、フレームの変更が発生した画面の中ほどの位置に "ティアリング"と呼ばれる目に見える画像の乱れが発生します。

ティアリングの再現例。画像の拡大部には、画像のシフト (横ずれ) がはっきりと映っています。
Unityの設定により、ディスプレイが更新していないタイミング、いわゆる「垂直ブランク」のタイミングのみにフレームを切替えさせることが出来ます。画質設定(Quality Settings)のVSyncオプションにより、フレームの切替えをディスプレイの垂直ブランク、あるいはひとつおきの垂直ブランクと同期をとることが出来ます。後者は、ゲームでのフレームのレンダリング処理に要する時間が複数回のディスプレイ更新分だけ時間を要する場合に役に立ちます。
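本文の VSync の説明は、小さな計算スケッチとして表せます。以下は説明用の仮の関数であり、Unity の API ではありません:

```javascript
// 垂直ブランクとの同期間隔から、実効フレームレートの上限を概算するスケッチ。
// effectiveFrameRate は説明用の仮の関数名です。
function effectiveFrameRate(refreshRate, vSyncCount) {
  if (vSyncCount <= 0) {
    return Infinity; // 同期しない場合、表示側ではフレームレートを制限しない
  }
  // vSyncCount = 1 なら毎回の垂直ブランク、2 ならひとつおきの垂直ブランクと同期
  return refreshRate / vSyncCount;
}

console.log(effectiveFrameRate(60, 1)); // 60Hz ディスプレイで毎回同期 → 60
console.log(effectiveFrameRate(60, 2)); // ひとつおきに同期 → 30
```

ゲームのフレームのレンダリングが 1 回のディスプレイ更新に収まらない場合は、ひとつおきの同期 (上の例では 30 フレーム毎秒) のほうが安定した表示になります。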
アンチエイリアス
アンチエイリアス処理により、ポリゴンの端の見た目が改善されるため、「ギザギザ」でなくなりますが、画面上では滑らかになります。 しかし、グラフィック カードのパフォーマンスが犠牲になり、使用されるビデオ メモリが増えます (CPU の負担は増えません)。 アンチエイリアス処理のレベルにより、ポリゴンの端がどのくらい滑らかになるか (消費されるビデオ メモリの量) が決定されます。
アンチエイリアス処理を使用しないと、ポリゴンの端が「ギザギザ」になります
6 倍のアンチエイリアス処理を使用すると、ポリゴンの端が円滑になります
ソフトパーティクル
ソフトパーティクルは、シーン中の物体同士の交差部分の付近をフェードアウト処理します。見た目は良くなりますが、計算処理の負荷は高くなり (複雑なピクセルシェーダ)、かつデプステクスチャをサポートしているプラットフォームでしか動作しません。さらに、レンダリングパスとしてディファード ライティングを使用するか、あるいはスクリプトからカメラにデプステクスチャをレンダリングさせる必要があります。
ソフトパーティクルを使用しないとシーンに交点がはっきりと描画されてしまいます。
ソフトパーティクルを使用するとシーンに交点がスムーズに描画されます。
class-RenderSettings
Render Settings には、 Light や Skybox のような、各種視覚要素のデフォルトの値をシーンに含めます。
レンダー設定を確認するには、メニューバーから を選択します。

プロパティ
| Fog | 有効にすると、霧はシーンを通じて描画されます。 |
| Fog Color | 霧の色。 |
| Fog Mode | 霧のモード。 Linear、Exponential (Exp) または Exponential Squared (Exp2) 。 遠くで霧がフェードインする方法を制御します。 |
| Fog Density | 霧の密度で、Exp と Exp2 霧モードでのみ使用されます。 |
| Linear Fog Start/End | 霧の開始および終了距離で、Linear 霧モードでのみ使用されます。 |
| Ambient Light | シーンの周辺光の色。 |
| Skybox Material | スカイボックスが追加されていないカメラに対してレンダリングされるデフォルトのスカイボックス。 |
| Halo Strength | Rangeに関連したすべての光ハローのサイズ。 |
| Flare Strength | シーン内のすべてのフレアの強度。 |
| Halo Texture | ライト内のすべてのハローに対する白熱光として現れる Texture への参照。 |
| Spot Cookie | すべてのスポット ライトに対する Cookie マスクとして現れる Texture2D への参照。 |
詳細
レンダー設定は、シーンごとに共通する視覚要素を定義するために使用されます。例えば、同じ環境に昼用と夜用の 2 つのレベルがあるとします。同じメッシュやプレハブを使用してシーンを構成しつつ、Ambient Light を昼ははるかに明るく、夜はより暗く変更できます。
霧
霧を有効にすると、シーンに霧のようなもやをかけることができます。 Fog DensityとFog Colorで個々に霧の見た目や色を調整できます。
霧の追加は、遠くのオブジェクトをフェードアウトさせて描画しないようにすることで、パフォーマンスを最適化するのに使用できます。ただし、霧を有効にするだけでは、このパフォーマンス最適化は機能しません。遠くのジオメトリが描画されないように、Camera のFar Clip Planeもあわせて調整する必要があります。まず霧がちょうど良く見えるように調整し、次に Camera のファー クリップ プレーンを、霧でフェードアウトしきった先のジオメトリが切り取られるところまで小さくするのがベストです。

霧をオフにしたシーン

霧をオンにした同じシーン
霧は、平行投影 (Orthographic) カメラモードでは均一にレンダリングされます。これは、シェーダが透視変換後空間の Z 座標を霧座標として出力しているためです。透視変換後の Z 座標は、実際には平行投影カメラでの霧には適していません。それでもこの方法をとっているのは、高速で余計な計算を必要としないためです。平行投影カメラを特別に扱うと、すべてのシェーダが若干遅くなってしまいます。
ヒント
- レンダー設定を徹底して微調整することがゲームに与える視覚的な効果を過小評価しないでください。
- レンダー設定は、シーンごとに行われます。 ゲーム内のシーンによってレンダー設定は異なります。
class-ScriptExecution
通常、異なるスクリプトのAwake、OnEnable、Update関数は、スクリプトがロードされた順 (実質的にランダム) に呼び出されます。Script Execution Order (スクリプト実行順) 設定により、この順序を明示的に指定できます。

Inspectorの「+」ボタンを押してスクリプトを追加し、ドラッグすることで実行順を変更できます。スクリプトはDefault Time (デフォルト時間) の上下に配置でき、上に配置した場合はデフォルト時間よりも早く、下に配置した場合はデフォルト時間より遅く実行されます。ダイアログの上から下の順序でスクリプトは実行されます。表示されていないスクリプトはデフォルト時間で実行され、その中での順序は不定となります。
Page last updated: 2012-11-18
class-TagManager
Tag Manager により、Layer と Tag を設定できます。 これを表示するには、 を選択します。

タグ マネージャ
プロパティ
| Tags | 最後の要素を入力することで新しい要素を追加できます。 |
| User Layer 8-31 | カスタムで名前を付けられたユーザー レイヤーを追加できます。 |
詳細
レイヤーを使用すると、オブジェクトの特定のグループにのみ、光線を投射したり、光をレンダリングまたは適用したりできます。レイヤーは GameObject の inspector で選択できます。レイヤーの使用法とタグの使用法については、それぞれ該当のページを参照してください。
タグは、タグ名を使用してスクリプトからオブジェクトを素早く検索するために使用されます。新しいタグを追加すると、GameObject のタグ ポップアップからそのタグを選択できます。
Page last updated: 2012-11-13
class-TimeManager

タイム マネージャ
プロパティ
| Fixed Timestep | 物理特性計算およびFixedUpdate()イベントの実行時を示すフレーム レートに依存しない間隔。 |
| Maximum Allowed Timestep | フレーム レートが低い場合に、最悪のシナリオを制限するフレーム レートに依存しない間隔。 物理特性計算およびFixedUpdate()イベントは、指定した時間を超えて実行されません。 |
| Time Scale | 時間が進む速度。 ブレット タイム効果をシミュレートするには、この値を変更します。 1 は、リアルタイムになります。 0.5 だとその半分の速度で、2 だと速度が 2 倍になります。 |
詳細
Fixed Timestep
Fixed time stepping は、安定した物理シミュレーションを行うために非常に重要です。すべてのコンピュータが同じように作られているわけではなく、Unity のゲームは異なるハードウェア構成で実行されるため、パフォーマンスも異なります。そのため、物理演算はゲームのフレームレートから独立して計算される必要があります。衝突検出や剛体移動などの物理演算は、フレームレートに依存しない固定ステップごとに実行されます。これにより、異なるコンピュータ上で、あるいはフレームレートに変化が生じた場合でも、シミュレーションがより一貫したものになります。フレームレートは例えば、画面上に多くのオブジェクトが表示されたり、ユーザーがバックグラウンドで別のアプリケーションを起動したりすると低下することがあります。
固定時間ステップはこのように計算されます。 各フレームが画面上で描画される前に、Unity は、固定デルタ時間で固定時間を進め、現在の時間に達するまで、物理特性計算を実行します。 これは直接Fixed Timestepプロパティに関連しています。 Fixed Timestepの値が小さいほど、物理特性計算が頻繁に行われます。 1 秒あたりの固定フレームの数は、1 をFixed Timestepで割ることで計算されます。 そのため、1 / 0.02 = 50 固定フレーム毎秒、1 / 0.05 = 20 固定フレーム毎秒となります。
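上記の「1 秒あたりの固定フレーム数」の計算は、次の小さなスケッチで確認できます (fixedFramesPerSecond は説明用の仮の関数名です):

```javascript
// Fixed Timestep (秒) から 1 秒あたりの固定フレーム数を求めるスケッチ。
function fixedFramesPerSecond(fixedTimestep) {
  // 固定フレーム数 = 1 / Fixed Timestep
  return 1 / fixedTimestep;
}

console.log(fixedFramesPerSecond(0.02)); // 1 / 0.02 = 50 固定フレーム毎秒
console.log(fixedFramesPerSecond(0.05)); // 1 / 0.05 = 20 固定フレーム毎秒
```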
簡単に言えば、固定アップデート値が小さくなるほど、物理特性計算の精度は高くなりますが、その分、CPU への負担も大きくなります。
Maximum Allowed Timestep
Fixed time stepping により、物理シミュレーションが安定します。しかし、ゲームが物理演算に大きく依存し、すでに実行速度が遅いか、フレームレートが低下することがある場合、パフォーマンスに悪影響を及ぼします。フレームの処理に時間がかかるほど、次のフレームに対して、より多くの固定アップデート ステップを実行する必要があります。これにより、パフォーマンスがさらに悪化します。このシナリオを回避するため、Unity は、物理演算が指定した閾値より長く実行されないようにするためのMaximum Allowed Timestepを導入しています。
フレームがMaximum Allowed Timestepで指定した時間よりも処理に時間がかかる場合、物理演算は、そのフレームの処理に Maximum Allowed Timestep 秒しかかからなかったかのように振る舞います。言い換えれば、フレームレートがある閾値以下になると、剛体が若干減速し、CPU が追いつくことができるようになります。
Maximum Allowed Timestepは、物理特性とFixedUpdate()イベントの両方に影響します。
Maximum Allowed Timestepは、Fixed Timestepと同様に秒単位で指定します。そのため 0.1 を設定すると、フレームレートが 1 / 0.1 = 10 フレーム毎秒を下回った場合に、物理演算とFixedUpdate()イベントが減速します。
通常のシナリオ
- Fixed Timestepが 0.01 の場合、physx、fixedUpdate およびアニメーションは 10ms ごとに処理する必要があります。
- フレーム レートが 〜33 ms の場合、固定ループは、平均で、視覚フレームごとに 3 回実行されます。
- しかし、フレーム時間が一定の値に固定されず、シーンの状態、OS バックグラウンド タスクなどを含む多くの要因に依存します。
- 上記 3. の理由から、フレーム時間が 40〜50 ms に達することがあり、その場合は固定ステップのループが 4〜5 回実行されます。
- 固定時間ステップ タスクがかなり重く、physx で時間が消費される場合、fixedUpdates とアニメーションによりフレーム時間が更に10ms 延長されますが、これは、これらのすべての固定時間ステップ タスクがもう 1 回繰り返されるということです。
- その結果、5. に記載したプロセスによってフレーム時間がさらに延び、固定ステップのループにいっそう時間がかかる場合があります。
- そのため、Maximum Allowed Timestepが導入されました。これは、1 つの視覚フレーム中に処理できる physx、fixedUpdates およびアニメーションの量を制限する仕組みです。Maximum Allowed Timestepを 100 ms に設定し、Fixed Timestepが 10 ms の場合、固定ステップのタスクは視覚フレームごとに最大 10 回実行されます。つまり、固定タイムステップの繰り返し数の増加によって、軽いパフォーマンス低下が大きなパフォーマンス低下を引き起こすことがあります。Maximum Allowed Timestepを 30 ms まで下げると、固定ステップの最大繰り返し数が 3 に制限されるため、physx、fixedUpdates およびアニメーションがフレーム時間を極端に膨らませることはなくなります。ただし、パフォーマンス低下が発生したときには、物理演算とアニメーションが若干減速します。
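上記のシナリオでの固定ステップ回数の制限は、次のようなスケッチで表せます。これは本文の説明に基づく説明用の簡略実装であり、Unity 内部の実装そのものではありません:

```javascript
// 固定タイムステップと Maximum Allowed Timestep によるクランプ処理のスケッチ。
// fixedStepsForFrame は説明用の仮の関数名です。
function fixedStepsForFrame(frameDelta, fixedTimestep, maxAllowedTimestep) {
  // フレームに時間がかかりすぎた場合、物理演算はそのフレームが
  // maxAllowedTimestep 秒しかかからなかったかのように扱う
  var clamped = Math.min(frameDelta, maxAllowedTimestep);
  // クランプ後の経過時間に収まる固定ステップの回数
  return Math.floor(clamped / fixedTimestep);
}

console.log(fixedStepsForFrame(0.033, 0.01, 0.1)); // 約 33ms のフレームでは 3 回
console.log(fixedStepsForFrame(0.5, 0.01, 0.1));   // 重いフレームでも 10 回に制限される
```

Maximum Allowed Timestep を下げるほど、1 視覚フレームあたりの固定ステップ回数の上限が小さくなることがわかります。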
ヒント
- スクリプトからTime Scaleを動的に変更することで、プレイヤーに時間の制御を与えられます。
- ゲームの物理特性が重いか、FixedUpdate()イベントでかなりの時間を消費する場合、Maximum Allowed Timestepを 0.1 に設定します。 これにより、物理特性が 1 秒あたり 10 フレーム以下でゲームを駆動するのを防ぎます。
comp-MeshGroup
3D Mesh は、Unity の主なグラフィック プリミティブです。Unity には、通常のメッシュ、スキン メッシュ、トレイル、3D ラインをレンダリングするための各種コンポーネントがあります。
Page last updated: 2012-11-13
class-MeshFilter
Mesh Filter はアセットからメッシュを取り、画面上でのレンダリングのために、Mesh Renderer に渡します。

メッシュ フィルタは、メッシュ レンダラと共に、モデルを画面に表示します。
プロパティ
| Mesh | レンダリングされる mesh への参照。 Meshは Project フォルダにあります。 |
詳細
メッシュ アセットをインポートすると、メッシュがスキンされる場合に、Unity が自動的に Skinned Mesh Renderer を作成し、そうでない場合は、メッシュ レンダラと共に、メッシュ フィルタを作成します。
シーン内のメッシュを確認するには、Mesh Renderer を GameObject に追加します。 これは自動で追加されますが、オブジェクトから削除した場合は、再度手動で追加する必要があります。 メッシュ レンダラがない場合も、メッシュはシーン (コンピュータのメモリ) 内に存在していますが、描画はされません。
Page last updated: 2012-11-13
class-MeshRenderer
Mesh Renderer は、Mesh Filter からのジオメトリを使い、オブジェクトの Transform コンポーネントで定義された位置でレンダリングします。

プロパティ
| Cast Shadows (Unity Pro のみ) | 有効にすると、この Mesh は影を生み出す Light が輝いた時に、影を生じさせます。 |
| Receive Shadows (Unity Pro のみ) | 有効にすると、このメッシュはその上に投射される影を表示します。 |
| Materials | モデルをレンダリングする Material のリスト。 |
| Light Probe Anchor | light probeを用いる場合に補間処理の基準位置を決めるTransform |
| Use Light Probes | メッシュのlight probeを有効にします。 |
詳細
3D パッケージからインポートされたメッシュは、複数のマテリアルを使用できます。各マテリアルに対してメッシュ レンダラのマテリアル リストに項目があるため、メッシュの各サブメッシュは別のマテリアルでレンダリングされます。もしメッシュ レンダラに割り当てられたマテリアルの数が、メッシュ内のサブメッシュの数より多い場合、最初のサブメッシュが残りのマテリアルで重ねてレンダリングされます。これにより、複数のマテリアルでマルチパス レンダリングを設定することができます。
Use Light Probesオプションが有効になっている場合、メッシュは light probe から光を受けることができます (詳細はマニュアルの light probe を参照して下さい)。light probe の補間には、メッシュの仮想位置としてひとつの点が使用されます。デフォルトでは、この位置はメッシュのバウンディングボックスの中心ですが、Light Probe AnchorプロパティにTransformをドラッグすることで上書きできます。アンカー設定は、オブジェクトが2つの隣接するメッシュを含む場合に役立ちます。各メッシュは別々のバウンディングボックスを持つため、デフォルトでは2つのつなぎ目が不連続に照らされてしまうからです。両方のメッシュが同じアンカー位置を使用するように設定すれば、一貫した照らされ方になります。
Page last updated: 2012-11-26
class-SkinnedMeshRenderer
Skinned Mesh Rendererはメッシュがスキニングされている場合、インポート時に自動追加されます。

Skinned Mesh Rendererを用いてレンダリングされたアニメキャラクター
プロパティ
| Cast Shadows (Pro only) | オンにした場合、影を生成するライトがメッシュを照らしたときに影を生成します。 |
| Receive Shadows (Pro only) | オンにした場合、メッシュに対して投影された影を描画します。 |
| Materials | モデルでレンダリングするMaterialsの一覧。 |
| Quality | すべての頂点に影響を与えるボーンの最大数。 |
| Update When Offscreen | オンにした場合、メッシュは画面外にあるときも更新されます。オフにした場合、画面外ではアニメーションの更新が無効になります。 |
| Bounds | このBounds(境界)はSkinned Meshを画面外に出たかどうかを判断するために用います。境界線となる箱もSceneViewに表示されます。境界は、インポート時にモデルに含まれるメッシュとアニメーションに基づいて事前計算されます。 |
| Mesh | レンダラーによって使用されるメッシュ。 |
詳細
Skinned Meshは、キャラクターのレンダリングに使用されます。キャラクターはボーンを使用してアニメーション化され、各ボーンはメッシュの一部に影響を与えます。複数のボーンが同じひとつの頂点に影響を与えることができ、その影響の大きさは重み付けで決まります。Unityでボーンキャラクターを使用する主なメリットは、ボーンに物理挙動を適用してキャラクターをRagdollにできることです。スクリプトからボーンを有効/無効にできるので、爆発を受けたキャラクターを瞬時にRagdollにすることが出来ます。

Ragdollとして有効にしたSkinned mesh
品質
Unityによりすべての頂点を1、2、または4のいずれかの個数のボーンでスキニングするよう設定ができます。 4ボーンウェイトは外観も良い一方で、処理コストが高くなります。2ボーンウェイトは良い妥協点であり、一般的にゲームで使用することができます。
QualityがAutomaticにセットされた場合Quality Settings のBlend Weights が使用されます。これによりエンドユーザが最適なパフォーマンスの品質設定を選択することができます。
画面外の更新とBounds(境界)
デフォルトでは、表示されていないスキンメッシュは更新されません。メッシュが画面内に戻ってくるまでのスキニングは更新されません。これはパフォーマンスの最適化の観点で重要なポイントで、たくさんのキャラクターが走り回っても非表示のときは処理負荷を抑えることができます。
しかし、Visibility (可視性) はメッシュのBounds(境界)により決定され、境界はインポート時に事前計算されます。Unityはアタッチされたすべてのアニメーションを境界の事前計算において考慮しますが、場合によってはUnityが境界をユーザのニーズに合うように計算できないことがあります。例えば: (いずれも、ボーンや頂点が事前計算された境界の外に押し出されるときに問題となります。)
- ランタイム時にアニメーションを追加した場合
- Additive(追加)アニメーションを使用した場合
- ボーンの位置をプロシージャで影響を及ぼした場合
- 頂点シェーダを使用して、事前計算された境界の外に頂点を押し出した場合
- Ragdollを使用した場合。
これらの例では、2つの解決方法があります:
- Bounds(境界)を、メッシュが取りうる最大の境界サイズに合うように修正します。
- Skinned Mesh Renderer のUpdate When Offscreenをオンにして、メッシュが常にスキニングおよびレンダリングされるようにします。
殆どの場合、パフォーマンスへの影響が少ない最初の方法を選択することになります。二つめの方法は、パフォーマンスを重視する必要がなく、境界の大きさを予測することが難しい場合 (例えばRagdollの場合) にのみ使用して下さい。
SkinnedMeshをRagdollで上手く機能させるために、Unityはインポート時にSkinnedMeshRendererをルートボーンに再マッピングします。ただし、これが行われるのはモデルファイル内にSkinnedMeshRendererがひとつだけある場合です。したがって、すべてのSkinnedMeshRendererをルートボーンまたはその子オブジェクトにアタッチできず、かつRagdollを使用する場合は、この最適化をオフにする必要があります。
ヒント
- Skinned Meshが現在インポートできるのは:
- Maya
- Cinema4D
- 3D Studio Max
- Blender
- Cheetah 3D
- XSI
- FBXフォーマットをサポートしている他のあらゆるツール
class-TextMesh
Text Mesh は、テキスト文字列を表示する 3Dジオメトリです。

テキスト メッシュ Inspector
から新しいテキスト メッシュを作成できます。
プロパティ
| Text | レンダリングされるテキスト。 |
| Offset Z | 描画時に transform.position.z からどの程度 Z 方向にオフセットして描画するか。 |
| Character Size | 各文字のサイズ (テキスト全体を拡大縮小します)。 |
| Line Spacing | テキストの行間にあるスペースの大きさ。 |
| Anchor | テキストのどの点がトランスフォームの位置と一致するか。 |
| Alignment | テキスト内で複数の行がどのように配置されるか (左、右、中央)。 |
| Tab Size | タブ ('\t') 文字に挿入されるスペースの大きさ。これはスペース文字のオフセットの倍数です。 |
| Font | テキストのレンダリングに使用する TrueType Font。 |
詳細
テキスト メッシュは、交通標識、落書きなどをレンダリングするのに使用されます。テキスト メッシュは、3D シーン内にテキストを配置します。GUI 用の汎用 2D テキストを作成するには、代わりに GUI Text コンポーネントを使用します。
次の手順に従って、カスタムのフォントでテキスト メッシュを作成します。
- TrueType Font (.ttf ファイル) を Explorer (Windows) または Finder (OS X) から Project View にドラッグして、フォントをインポートします。
- プロパティ ビューで、インポートしたフォントを選択します。
- を選択します。
カスタムの TrueType フォントで、テキスト メッシュが作成されました。Scene View で Transform を使用して、テキストを拡大縮小したり移動したりできます。
注意: テキスト メッシュのフォントを変更する場合は、コンポーネントの Font プロパティを設定するとともに、フォント マテリアルのテクスチャに正しいフォント テクスチャを設定する必要があります。このテクスチャはフォント アセットの折りたたみ (foldout) 内にあります。テクスチャの設定を忘れると、メッシュ内のテキストはブロック状に表示され、正しく描画されません。
ヒント
- Textプロパティにテキストを入力する際は、 押したまま、 を押すことで、改行を作成できます。
- 1001freefonts.com から無料の TrueType フォントをダウンロードできます (TrueType フォントが含まれているので、Windows フォントをダウンロードしてください)。
- Textプロパティを記述している場合、文字列にエスケープ キャラクター "\n" を挿入することで、改行を追加できます。
comp-NetworkGroup
このグループには、ネットワーク マルチプレイヤー ゲームに関連したすべての Component を含みます。
Page last updated: 2012-11-13
class-NetworkView
Network View は、Unity でマルチプレイヤーのネットワーク ゲームを作成するための入り口です。 使いやすく、強力です。 このため、ネットワーク ビューで作業を開始する前に、ネットワークに関する基本的な原理を理解しておくことをお勧めします。 Network Reference Guide で基本的な原理を学ぶことができます。

ネットワーク ビュー Inspector
State Synchronization または Remote Procedure Callsを含むネットワーク機能を使用するには、GameObject にネットワーク ビューを追加させる必要があります。
プロパティ
| State Synchronization | このネットワーク ビューで使用される State Synchronization のタイプ。 |
| Off | State Synchronization が使用されません。 RPCs のみを送信したい場合に最適です。 |
| Reliable Delta Compressed | 最後の状態と現在の状態の差が送信されます。何も変更されていない場合は、何も送信されません。 このモードは順序付けられます。 パケットが失われた場合は、失われたパケットが自動的に再送信されます。 |
| Unreliable | 完全な状態が送信されます。 より多くの帯域幅を使用しますが、パケット損失の影響は最小化されます。 |
| Observed | ネットワークに送信される Component データ。 |
| View ID | このネットワーク ビューに対する一意の識別子。 これらの値はインスペクタでは読み取り専用です。 |
| Scene ID | この特定のシーンでのネットワーク ビューの ID 番号。 |
| Type | ネットワーク ビューがSceneに保存されたものか、ランタイムにAllocated (割り当て) されたものかを示します。 |
詳細
ネットワーク ビューを GameObject に追加する場合、次の 2 つを決定する必要があります。
- ネットワーク ビューで送信したいデータの種類。
- そのデータの送信方法。
送信データの選択
ネットワーク ビューのObservedプロパティは、1 つのコンポーネントを含むことができます。 このコンポーネントには、Transform、Animation、RigidBody またはスクリプトがあります。 Observedコンポーネントがなんであれ、それに関するデータがネットワーク上で送信されます。 ドロップダウンからコンポーネントを選択するか、コンポーネント ヘッダを直接変数にドラッグできます。 RPC コールなどを使用して、直接データを送信しない場合は、同期化をオフにし (データが直接送信されません)、Observed プロパティとして何も設定する必要はありません。 RPC コールは、ネットワーク ビューが 1 つあればよいので、ビューがすでに存在している場合は、RPC にビューを追加する必要はありません。
データの送信方法
Observedコンポーネントのデータを送信するには、 State Synchronization か Remote Procedure Calls のいずれかを用います。
State Synchronization を使用するには、ネットワーク ビューのState SynchronizationをReliable Delta CompressedかUnreliableのいずれかに設定します。 Observedコンポーネントのデータがネットワーク上で自動的に送信されます。
Reliable Delta Compressedは順序付けられます。 パケットは常に送信順に受信されます。 パケットがドロップすると、そのパケットは再送信されます。 以降のパケットはすべて最初のパケットが受信されるまで、列に入れられます。 最後の送信値と現在の送信値間の差のみが送信され、差がない場合は何も送信されません。
Observedコンポーネントとしてスクリプトを使用する場合は、スクリプト内でデータを明示的にシリアライズする必要があります。これはOnSerializeNetworkView()関数内で行います。
function OnSerializeNetworkView (stream : BitStream, info : NetworkMessageInfo) {
var horizontalInput : float = Input.GetAxis ("Horizontal");
stream.Serialize (horizontalInput);
}
上記の関数は、アップデートを受信したときは常に (ストリームからの) アップデートを horizontalInput に書き込み、そうでないときは変数から読み取ってストリームに書き込みます。アップデートの受信時と送信時で異なる処理を行いたい場合は、BitStream クラスのisWriting属性を使用できます。
function OnSerializeNetworkView (stream : BitStream, info : NetworkMessageInfo) {
var horizontalInput : float = 0.0;
if (stream.isWriting) {
// 送信
horizontalInput = Input.GetAxis ("Horizontal");
stream.Serialize (horizontalInput);
} else {
// 受信
stream.Serialize (horizontalInput);
// ... 受信した変数で意味のある何かを行います
}
}
OnSerializeNetworkViewが、ネットワーク マネージャのプロジェクト設定で指定されたsendRateに従って呼び出されます。 デフォルトでは、これは 1 秒あたり 15 回になります。
スクリプトでリモート プロシージャ コールを使用したい場合は、そのスクリプトが追加されているのと同じ GameObject に NetworkView コンポーネントがひとつ必要になります。NetworkView は他の用途にも使用できますが、RPC の送信にのみ使用する場合は、何も観察 (Observed) せず、状態同期をオフにできます。ネットワークから呼び出せる関数には、@RPC属性が必要です。同じ GameObject に追加されたスクリプトから networkView.RPC() を呼び出して、リモート プロシージャ コールを実行します。
var playerBullet : GameObject;
function Update () {
if (Input.GetButtonDown ("Fire1")) {
networkView.RPC ("PlayerFire", RPCMode.All);
}
}
@RPC
function PlayerFire () {
Instantiate (playerBullet, playerBullet.transform.position, playerBullet.transform.rotation);
}
RPC は高い信頼性で送信され、順序付けられます。 RPC の詳細については、RPC Details ページを参照してください。
ヒント
- ネットワーク ビューの使用方法がまだ不明の方は、Network Reference Guide を参照してください。
- リモート プロシージャ コールを使用するのに、State Synchronization を無効にする必要はありません。
- 複数のネットワーク ビューがあり、その内の 1 つで RPC を呼び出したい場合は、GetComponents(NetworkView)[i].RPC()を使用してください。
comp-Effects
このグループには、視覚効果に関連するコンポーネントが含まれます。
- パーティクルシステム(Shuriken)
- ハロー
- レンズ フレア
- ライン レンダラ
- トレイル レンダラ
- プロジェクタ
- Particlesシステム (Unity4.0以降では「旧Particlesシステム」)
class-ParticleSystem
Unityの中のパーティクルシステムは、大量の煙、蒸気、炎やその他の大気圏エフェクトを作るために使用されます。

新規にParticle Systemを作成するにはParticle Systemゲームオブジェクトを作成(メニューで -> -> を選択)するか、空のGameObjectを作成して ParticleSystemコンポーネントを追加します。(メニューで->を選択)
The Particle System Inspector (Shuriken)
The Inspector shows one particle system at a time (the currently selected one), and it looks like this:

Individual particle systems can take on various complex behaviors by using Modules.
They can also be extended by being grouped together into Particle Effects.
If you press the button , this will open up the Extended , that shows all of the particle systems under the same root in the scene tree. For more information on particle system grouping, see the section on Particle Effects.
シーンビューの編集
パーティクルシステムを作成および編集するときまたは拡張されたを使用し、その変更内容はに反映されます。シーンビューにはがあり、現在選択したのプレイバックを編集モードで制御することが出来、アクションとして、、およびが用意されています。

Playback Time ラベルをドラッグすることにより、プレイバック時間をスクラブして調整することができます。全てのプレイバックコントロールにはショートカットキーがあり、それらはPreferencesウィンドウにてカスタム設定できます。
パーティクルシステムCurve Editor
MinMax Curve
パーティクルシステム モジュールのプロパティの多くは、時間の経過とともに変化する値を持つことができます。そのような変化は MinMax Curves (最大最小カーブ) を通じて表現できます。時間とともにアニメーションできるプロパティ (たとえばsizeやspeed) には右側にプルダウンメニューがあり、そこから方式を選択できます。
Constant: プロパティの値が時間とともに変化しないので、Curve Editorに表示されません。
Random between constants: プロパティの値は、2つの定数間のランダム値に設定されます。
Curve:プロパティの値が時間とともにCurve Editorのカーブに基づいて変化します。

カーブでアニメーションされたプロパティ
Random between curves: プロパティの値は、2つの最大、最小のカーブの間でランダムに設定され、値は時とともに生成されたカーブに基づいて変化します。

Random Between Two Curvesとしてアニメーションされたプロパティ。
Curve Editorでは、"X"軸が 0 からDurationプロパティで指定した値までの時間を表し、"Y"軸が各時間におけるアニメーションされたプロパティの値を示します。"Y"軸の範囲はCurve Editor右上隅にある数値フィールドで調整することができます。現時点では、Curve Editorはパーティクルシステムの全てのカーブを同じウィンドウに表示します。
同じCurve Editorで複数のカーブを表示。
なお、右下隅の" - "は現在選択されているカーブを削除する一方で"+"はそれを"最適化"します(これにより、高々3つのキーをもつパラメータ化されたカーブとなります)。
3D空間のベクトルを表すアニメーションされたプロパティにはTripleMinMaxカーブが用意されており、これはx軸、y軸、z軸のシンプルなカーブを横に並べたもので、次のように表示されます:

Curve Editorで複数のカーブの管理
Curve Editorが煩雑になるのを防ぐため、インスペクタでカーブをクリックして表示のオン/オフを切り替えることができます。また、Particle System Curvesのタイトルバー上で右クリックすると、Curve Editorをインスペクタから切り離すことができ、次のように表示されます:
他のウィンドウと同様で、Curve Editorのウィンドウをドックすることが出来ます。
カーブの働きに関する情報については、Curve Editorドキュメンテーションを参照のこと。
パーティクルシステムの色およびグラデーション(Shuriken)

色を扱うプロパティについて、Particle SystemはColor and Gradient Editorを使用します。これはCurve Editorと同じような働きをします。
カラーベースのプロパティは右側にプルダウンメニューがあり、好きな方法を選択することが出来ます。

Color: 色は常に同じになります(Color Picker を参照してください)。
Gradient: グラデーション(RGBA)はGradient Editor で編集したとおりに、時間とともに変化します。
Random Between Two Colors: 色は時間とともに変化し、Color Picker で指定した二つの値の間でランダムに選択されます。
Random Between Two Gradients: グラデーション(RGBA)はGradient Editor で指定した二つの値の間でランダムに選択され、時間とともに変化します。
Page last updated: 2012-11-21
class-Halo
Halo は、空気中の細かいゴミの印象を与えるのに使用される、光源周辺の明るいエリアです。

個々のハロー Component での光
プロパティ
ハローは、Render Settings で設定されたハロー テクスチャを使用します。何も割り当てられていない場合は、デフォルトのものを使用します。個別のハロー コンポーネントを追加しなくても、Light コンポーネントの設定だけでハローを表示させることもできます。
| Color | ハローの色。 |
| Size | ハローのサイズ。 |
ヒント
- シーン ビューでハローを表示するには、Scene View ツールバーの ボタンにチェックを入れます。
class-LensFlare
Lens Flare は、カメラのレンズ内で屈折する光の効果をシミュレートします。明るい光やぼんやりとした光を表現したり、シーンにちょっとした雰囲気を加えたりするのに使用されます。

レンズ フレア Inspector
レンズ フレアを最も簡単に設定するには、Light のフレア プロパティを割り当てます。 Unity では、Standard Assets package に事前設定されたフレアのサンプルを用意しています。
ない場合は、メニューバーから で空の GameObject を作成し、 でレンズ フレア Component を追加します。 次に、インスペクタでFlareを選択します。
Scene Viewでレンズ フレアの効果を確認するには、シーン ビュー ツールバーで ボタンをチェックします。

シーン ビュー ツールバーでレンズ フレアを表示するには、 ボタンを有効にします
プロパティ
| Flare | レンダリングする Flare 。 フレアは、レンズ フレアの外観のすべての側面を定義します。 |
| Color | シーンのムードによりフィットするよう、一部のフレアに色を付けることができます。 |
| Brightness | レンズ フレアの大きさと明るさ。 |
| Directional | 設定すると、フレアがゲーム オブジェクトの正の Z 軸に沿って方向付けられます。 まるで無限の彼方にあるように表示され、オブジェクトの位置を追跡せず、Z 軸の方向のみ追跡します。 |
詳細
フレアは、Light コンポーネントのプロパティとして直接設定するか、レンズ フレア コンポーネントとして個別に設定できます。光に追加すると、自動的にその光の位置と方向を追跡します。より正確に制御したい場合に、このコンポーネントを使用します。
フレアを表示するには、Camera に Flare Layer コンポーネントが追加されている必要があります (これはデフォルトで追加されているので、通常は設定は必要ありません)。
ヒント
- レンズ フレアの使用に関しては注意してください。
- 非常に明るいレンズ フレアを使用する場合は、その方向がシーンの主要な光源と合うようにしてください。
- 自身のフレアを設計するには、フレア アセットを作成する必要があります。まず標準アセットのフォルダ内で提供されているフレア アセットの一部をコピーし、次にそれを修正します。
- レンズ フレアは、Colliders にブロックされます。 フレア GameObject とカメラ間のコライダが Mesh Renderer を持たない場合でも、コライダはフレアを非表示にします。
class-LineRenderer
Line Renderer は、3D 空間における 2 つ以上の点の配列を取り、それぞれの間に直線を描画します。 従って、1 つのライン レンダラ コンポーネントを使用して、1 本の直線から、複雑な螺旋まで描画できます。 線は必ず連続した状態になっています。2 本以上の完全に個別の線を描画したい場合、それぞれ自身のライン レンダラを持つ、複数の GameObject を使用する必要があります。
ライン レンダラは、1 ピクセルの細い線はレンダリングしません。 幅があり、テクスチャを貼れるビルボード線をレンダリングします。 Trail Renderer と同じ線レンダリング用のアルゴリズムを使用します。

ライン レンダラ Inspector
プロパティ
| Materials | このリストの最初のマテリアルが線をレンダリングするのに使用されます。 |
| Positions | 接続する Vector3 点の配列。 |
| Size | この線でのセグメントの数。 |
| Parameters | 各線に対するパラメータのリスト。 |
| StartWidth | 最初の線の位置での幅。 |
| EndWidth | 最後の線の位置での幅。 |
| Start Color | 最初の線の位置での色。 |
| End Color | 最後の線の位置での色。 |
| Use World Space | 有効にすると、オブジェクトの位置が無視され、線がワールドの原点周辺でレンダリングされます。 |
詳細
ライン レンダラを作成するには、次の手順に従います。
- を選択します。
- を選択します。
- ライン レンダラにテクスチャか Material をドラッグします。マテリアルには粒子 (Particle) シェーダを使用するのが最適です。
ヒント
- ライン レンダラは、1 つのフレームですべての頂点を配置する必要がある効果に使用するのに適しています。
- この線は、Camera を動かすと共に回転するように見える場合があります。 これは意図的なものです。
- ライン オブジェクトは、GameObject で唯一のレンダラである必要があります。
class-TrailRenderer
Trail Renderer は、オブジェクトが動くに連れ、シーン内のオブジェクトの後ろにトレイルを作成するのに使用します。

トレイル レンダラ Inspector
プロパティ
| Materials | トレイルをレンダリングするのに使用される Material の配列。 粒子シェーダがトレイルに最適に機能します。 |
| Size | Material配列内の要素数。 |
| Element 0 | トレイルをレンダリングするのに使用される Material への参照。 要素の総数は、Sizeプロパティで決定されます。 |
| Time | トレイルの長さ、単位は秒。 |
| Start Width | オブジェクトの位置でのトレイルの幅。 |
| End Width | 最後でのトレイルの幅。 |
| Colors | トレイルの長さ全体で使用する色の配列。 色でアルファ透明度を設定することもできます。 |
| Color0 to Color4 | 最初から最後までのトレイルの色。 |
| Min Vertex Distance | トレイルのアンカー点間の最小距離。 |
| AutoDestruct | オブジェクトがTime秒間動かずにいた場合にオブジェクトを破壊するには、これを有効にします。 |
詳細
トレイル レンダラは、発射体の後ろのトレイルや、飛行機の翼端から伸びる飛行機雲に最適です。一般的に、速度感を出すのに適しています。
トレイル レンダラを使用する際は、その GameObject では他のレンダラを使用しないでください。空の GameObject を作成し、トレイル レンダラを唯一のレンダラとして追加するのがベストです。その上で、トレイル レンダラを追従させたいオブジェクトにペアレント化 (親子付け) できます。
マテリアル
トレイル レンダラは、粒子 Shader を持つマテリアルを使用する必要があります。マテリアルに使用される Texture は、正方形である必要があります (256x256 や 512x512 など)。
トレイル幅
Timeプロパティと共に、トレイルのStartとEnd Widthを設定することで、トレイルの表示のされ方や動作の仕方を調整できます。 例えば、Start Widthを 1、End Widthを 2 に設定することで、船の後ろにウェイクを作成できます。これらの値はおそらく、ゲーム向けに微調整する必要があります。
トレイル色
トレイルは、5 つの異なる色 / 不透明度の組み合わせを順に遷移させることができます。色を使うことで、明るい緑のプラズマ トレイルを徐々に薄れさせて鈍い灰色に消失させたり、虹色を順に変化させたりできます。色を変更したくない場合でも、各色の不透明度を変えてトレイルの先頭と末尾をフェードイン / フェードアウトさせると、非常に効果的です。
Min Vertex Distance
Min Vertex Distance値は、トレイルを含むオブジェクトが、トレイルの新しい部分が固定されるまでにどの程度移動する必要があるかを決定します。0.1 などの低い値では、トレイルの部分がより頻繁に作成され、トレイルが滑らかになります。1.5 のような高い値では、外観がよりギザギザになります。低い値 (滑らかなトレイル) はパフォーマンスを若干犠牲にするため、作成したい効果を達成できる範囲で、できる限り大きな値を使用してみてください。
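Min Vertex Distance の働きは、次のようなスケッチで表せます。2D 座標を使った説明用の簡略実装で、buildTrailPoints は仮の関数名です:

```javascript
// オブジェクトの移動履歴から、前回のアンカー点より minVertexDistance 以上
// 離れたときだけ新しいアンカー点を固定するスケッチ (2D 座標で簡略化)。
function buildTrailPoints(positions, minVertexDistance) {
  var points = [positions[0]];
  for (var i = 1; i < positions.length; i++) {
    var last = points[points.length - 1];
    var dx = positions[i][0] - last[0];
    var dy = positions[i][1] - last[1];
    if (Math.sqrt(dx * dx + dy * dy) >= minVertexDistance) {
      points.push(positions[i]); // 距離が閾値を超えたら頂点を固定
    }
  }
  return points;
}

var path = [[0, 0], [0.05, 0], [0.2, 0], [0.25, 0], [0.5, 0]];
console.log(buildTrailPoints(path, 0.1).length); // 閾値 0.1 では 3 点が固定される
```

minVertexDistance を大きくすると固定される頂点が減ってトレイルが粗くなり、小さくすると頂点が増えて滑らかになる、という本文の関係がこのスケッチでも確認できます。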
ヒント
- 粒子マテリアルはトレイル レンダラと併用してください。
- トレイル レンダラは、連続するフレームの上に配置する必要があり、すぐには表示できません。
- トレイル レンダラが回転し、その他の Particle Systems 同様、カメラの方を向く面を表示します。
class-Projector
Projector により、Material を錐台と交差するすべてのオブジェクトに投影できます。このマテリアルには、投影効果を正しく機能させる特殊なタイプのシェーダを使用する必要があります。提供されている Projector/Light と Projector/Multiply シェーダの使用例については、Unity の標準アセットのプロジェクタ プレハブを参照してください。

プロジェクタ Inspector
プロパティ
| Near Clip Plane | クリップ近面より手前のオブジェクトは投影されません。 |
| Far Clip Plane | この距離を超えたオブジェクトは影響を受けません。 |
| Field Of View | 視野角 (単位: 度)。 これは、プロジェクタが Orthographic でない場合にのみ使用されます。 |
| Aspect Ratio | プロジェクタのアスペクト比。 これにより、プロジェクタの高さと幅を調整できます。 |
| Is Ortho Graphic | 有効にすると、投影が透視投影の代わりに Orthographic (平行投影) になります。 |
| Ortho Graphic Size | 投影の Orthographic サイズ。Is Ortho Graphicがオンになっている時にのみ使用されます。 |
| Materials | オブジェクトに投影されるマテリアル。 |
| Ignore Layers | Ignore Layersのいずれかに属するオブジェクトは影響を受けません。 デフォルトでは、Ignore Layersは設定されていないため、プロジェクタの錐台と交差するすべてのジオメトリが影響を受けます。 |
詳細
プロジェクタでは以下の操作が可能です。
- 影の作成。
- Render Texture を使用して、世界の別の場所を映す別の Camera と組み合わせた、現実世界のようなプロジェクタの作成。
- 弾痕の作成。
- 独創的な照明効果。

プロジェクタは、このロボットに Blob Shadow を作成するのに使用されます
簡単な影効果を作成したい場合は、 Prefab をシーンにドラッグするだけです。 マテリアルを修正して、異なる Blob Shadow テクスチャを作成できます。
注意: プロジェクタを作成する際は必ず、プロジェクタのテクスチャのマテリアルのラップ モードをclampに設定してください。そうでない場合、プロジェクタのテクスチャは繰り返し表示され、キャラクターで必要な影の効果が得られません。
ヒント
- プロジェクタの Blob shadow は、環境に適切に影を投影するのに使用すれば、印象的なスプリンター セルのような照明効果を作成できます。
- Falloffテクスチャをプロジェクタのマテリアルで使用しない場合、前後両方に投影し、二重投影が生じる場合があります。 これを修正するには、左側のピクセル列が黒の、アルファのみの Falloff テクスチャを使用します。
comp-ParticlesLegacy
Particles は、主に3次元空間に2Dのイメージを描画します。煙、火、水滴、落ち葉などのエフェクトに主に利用されます。Particlesシステムは、Particle Emitter、Particle Animator、Particle Rendererの三つのコンポーネントでできています。静的なParticlesを作りたい場合は、Particle EmitterとParticle Rendererを一緒に使います。Particle Animatorは、パーティクルを異なる方向に動かしたり、色を変更したりするときに使います。また、スクリプトでそれぞれのParticlesに独立した動きをつけることもできるので、独自の振る舞いをさせたい場合はスクリプトで行うと良いでしょう。
詳しくは Particlesスクリプトリファレンス を参照して下さい。
楕円パーティクル エミッタ(旧パーティクルシステム)
Ellipsoid Particle Emitter は、球体内に粒子を発生させます。 下のEllipsoidプロパティを使用して、球体を拡大縮小および延長させます。

楕円パーティクル エミッタ Inspector
プロパティ
| Emit | If enabled, the emitter will emit particles. |
| Min Size | The minimum size each particle can be at the time when it is spawned. |
| Max Size | The maximum size each particle can be at the time when it is spawned. |
| Min Energy | The minimum lifetime of each particle, measured in seconds. |
| Max Energy | The maximum lifetime of each particle, measured in seconds. |
| Min Emission | The minimum number of particles that will be spawned every second. |
| Max Emission | The maximum number of particles that will be spawned every second. |
| World Velocity | The starting speed of particles in world space, along X, Y, and Z. |
| Local Velocity | The starting speed of particles along X, Y, and Z, measured in the object's orientation. |
| Rnd Velocity | A random speed along X, Y, and Z that is added to the velocity. |
| Emitter Velocity Scale | The amount of the emitter's speed that the particles inherit. |
| Tangent Velocity | The starting speed of particles along X, Y, and Z, across the Emitter's surface. |
| Simulate In World Space | If enabled, the particles don't move when the emitter moves. If false, when you move the emitter, the particles follow it around. |
| One Shot | If enabled, the particle numbers specified by min & max emission is spawned all at once. If disabled, the particles are generated in a long stream. |
| Ellipsoid | 粒子が内側に生じる、X,Y、Z にそった球体のスケール。 |
| MinEmitterRange | 球体の中心で空白のエリアを決定します - これは、球体の縁に粒子を表示させるのに使用します。 |
詳細
楕円パーティクル エミッタ (EPE) は、基本的なエミッタで、 からシーンに Particle System を追加するよう選択すると含まれます。 粒子を生じさせる境界を定義し、粒子に初速を与えます。 ここから、Particle Animator を使用して、希望の効果を達成するのに、時間と共に粒子がどのように変化するかを操作します。
Particle Emitters work in conjunction with Particle Animators and Particle Renderers to create, manipulate, and display Particle Systems. All three Components must be present on an object before the particles will behave correctly. When particles are being emitted, all different velocities are added together to create the final velocity.
Spawning Properties
Spawning properties like Size, Energy, Emission, and Velocity will give your particle system distinct personality when trying to achieve different effects. Having a small Size could simulate fireflies or stars in the sky. A large Size could simulate dust clouds in a musky old building.
Energy and Emission will control how long your particles remain onscreen and how many particles can appear at any one time. For example, a rocket might have high Emission to simulate density of smoke, and high Energy to simulate the slow dispersion of smoke into the air.
Velocity will control how your particles move. You might want to change your Velocity in scripting to achieve interesting effects, or if you want to simulate a constant effect like wind, set your X and Z Velocity to make your particles blow away.
Simulate in World Space
If this is disabled, the position of each individual particle will always translate relative to the Position of the emitter. When the emitter moves, the particles will move along with it. If you have Simulate in World Space enabled, particles will not be affected by the translation of the emitter. For example, if you have a fireball that is spurting flames that rise, the flames will be spawned and float up in space as the fireball gets further away. If Simulate in World Space is disabled, those same flames will move across the screen along with the fireball.
Emitter Velocity Scale
This property will only apply if Simulate in World Space is enabled.
If this property is set to 1, the particles will inherit the exact translation of the emitter at the time they are spawned. If it is set to 2, the particles will inherit double the emitter's translation when they are spawned. 3 is triple the translation, etc.
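The scaling rule above can be sketched as a small hypothetical helper (inheritedVelocity is an illustrative name, not a Unity API):

```javascript
// Sketch of Emitter Velocity Scale: particles inherit the emitter's
// velocity multiplied by the scale factor at the moment they spawn.
// Only applies when Simulate in World Space is enabled.
function inheritedVelocity(emitterVelocity, emitterVelocityScale) {
  return emitterVelocity.map(function (v) {
    return v * emitterVelocityScale;
  });
}

console.log(inheritedVelocity([2, 0, 1], 1)); // inherit the emitter's exact velocity
console.log(inheritedVelocity([2, 0, 1], 2)); // inherit double the emitter's velocity
```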
One Shot
One Shot emitters will create all particles within the Emission property all at once, and cease to emit particles over time. Here are some examples of different particle system uses with One Shot Enabled or Disabled:
Enabled:
- Explosion
- Water splash
- Magic spell
Disabled:
- Gun barrel smoke
- Wind effect
- Waterfall
Min Emitter Range
「Min Emitter Range」は、粒子を発生させることができる楕円内の深さを決定します。 0 に設定すると、楕円の中心から最外部の範囲の任意の場所に粒子を発生させることができます。 1 に設定すると、楕円の最外部の範囲に粒子の場所を限定できます。

「Min Emitter Range が 0 の場合」

「Min Emitter Range が 1 の場合」
ヒント
- Be careful of using many large particles. This can seriously hinder performance on low-level machines. Always try to use the minimum number of particles to attain an effect.
- The Emit property works in conjunction with the AutoDestruct property of the Particle Animator. Through scripting, you can cease the emitter from emitting, and then AutoDestruct will automatically destroy the Particle System and the GameObject it is attached to.
メッシュ粒子エミッタ(旧パーティクルシステム)
Mesh Particle Emitter は、メッシュ周辺に粒子を放出します。 粒子はメッシュ表面に放出されます。これは、粒子をオブジェクトと複雑な方法で相互に作用させたい場合に必要になる場合があります。

メッシュ粒子エミッタ Inspector
プロパティ
| Emit | If enabled, the emitter will emit particles. |
| Min Size | The minimum size each particle can be at the time when it is spawned. |
| Max Size | The maximum size each particle can be at the time when it is spawned. |
| Min Energy | The minimum lifetime of each particle, measured in seconds. |
| Max Energy | The maximum lifetime of each particle, measured in seconds. |
| Min Emission | The minimum number of particles that will be spawned every second. |
| Max Emission | The maximum number of particles that will be spawned every second. |
| World Velocity | The starting speed of particles in world space, along X, Y, and Z. |
| Local Velocity | The starting speed of particles along X, Y, and Z, measured in the object's orientation. |
| Rnd Velocity | A random speed along X, Y, and Z that is added to the velocity. |
| Emitter Velocity Scale | The amount of the emitter's speed that the particles inherit. |
| Tangent Velocity | The starting speed of particles along X, Y, and Z, across the Emitter's surface. |
| Simulate In World Space | If enabled, the particles don't move when the emitter moves. If false, when you move the emitter, the particles follow it around. |
| One Shot | If enabled, the particle numbers specified by min & max emission is spawned all at once. If disabled, the particles are generated in a long stream. |
| Interpolate Triangles | 有効にすると、粒子がメッシュ表面全体に発生します。 無効にすると、粒子はメッシュの頂点からのみ発生します。 |
| Systematic | 有効にすると、粒子は、メッシュで定義した頂点の順番に発生します。 メッシュでの頂点の順番をめったに直接制御しない場合でも、ほとんどの 3D モデリング アプリケーションでは、プリミティブ使用時に非常に対称的な設定を用意しています。 これを機能させるためには、メッシュに面が含まれていない必要があります。 |
| Min Normal Velocity | 粒子がメッシュから排出される最小量。 |
| Max Normal Velocity | 粒子がメッシュから排出される最大量。 |
詳細
メッシュ粒子エミッタ (MPE) は、より簡単な Ellipsoid Particle Emitter よりも発生位置や方向をより正確に制御したい場合に使用します。 高度な効果を生み出すのに使用できます。
MPE は、追加されたメッシュの頂点で粒子を発生させることで機能します。 そのため、ポリゴンの密度がより高いメッシュのエリアでは、より粒子発生の密度が高くなります。
Particle Emitters work in conjunction with Particle Animators and Particle Renderers to create, manipulate, and display Particle Systems. All three Components must be present on an object before the particles will behave correctly. When particles are being emitted, all different velocities are added together to create the final velocity.
Spawning Properties
Spawning properties like Size, Energy, Emission, and Velocity will give your particle system distinct personality when trying to achieve different effects. Having a small Size could simulate fireflies or stars in the sky. A large Size could simulate dust clouds in a musty old building.
Energy and Emission will control how long your particles remain onscreen and how many particles can appear at any one time. For example, a rocket might have high Emission to simulate density of smoke, and high Energy to simulate the slow dispersion of smoke into the air.
Velocity will control how your particles move. You might want to change your Velocity in scripting to achieve interesting effects, or if you want to simulate a constant effect like wind, set your X and Z Velocity to make your particles blow away.
Simulate in World Space
If this is disabled, the position of each individual particle will always translate relative to the Position of the emitter. When the emitter moves, the particles will move along with it. If you have Simulate in World Space enabled, particles will not be affected by the translation of the emitter. For example, if you have a fireball that is spurting flames that rise, the flames will be spawned and float up in space as the fireball gets further away. If Simulate in World Space is disabled, those same flames will move across the screen along with the fireball.
Emitter Velocity Scale
This property will only apply if Simulate in World Space is enabled.
If this property is set to 1, the particles will inherit the exact translation of the emitter at the time they are spawned. If it is set to 2, the particles will inherit double the emitter's translation when they are spawned. 3 is triple the translation, etc.
One Shot
One Shot emitters will create all particles within the Emission property all at once, and cease to emit particles over time. Here are some examples of different particle system uses with One Shot Enabled or Disabled:
Enabled:
- Explosion
- Water splash
- Magic spell
Disabled:
- Gun barrel smoke
- Wind effect
- Waterfall
Interpolate Triangles
Setting the emitter to Interpolate Triangles allows particles to be spawned between the mesh's vertices. This option is off by default, so particles are only spawned at the vertices.

A sphere with Interpolate Triangles off (the default)
Enabling this option spawns particles on and in between the vertices, essentially anywhere on the mesh's surface (seen below).

A sphere with Interpolate Triangles on
Note that even with Interpolate Triangles enabled, particles will still be denser in areas of the mesh that are denser with polygons.
Systematic
Enabling Systematic spawns particles in the order of the mesh's vertices. The vertex order is set in your 3D modeling application.

An MPE attached to a sphere with Systematic enabled
Normal Velocity
Normal Velocity controls the speed at which particles are emitted along the normal of the surface where they are spawned.
For example, create a Mesh Particle System, use a cube mesh as the emitter, enable Interpolate Triangles, and set Normal Velocity Min and Max to 1. The particles will now be emitted from the faces of the cube in straight lines.
See Also
Hints
- Be careful of using many large particles. This can seriously hinder performance on low-level machines. Always try to use the minimum number of particles to attain an effect.
- The Emit property works in conjunction with the AutoDestruct property of the Particle Animator. Through scripting, you can cease the emitter from emitting, and then AutoDestruct will automatically destroy the Particle System and the GameObject it is attached to.
- MPEs can also be used to create the glow from many lamps placed in a scene. Simply make a mesh with one vertex at the center of each lamp, and build an MPE from that with a halo material. Great for evil sci-fi worlds.
Particle Animator (Legacy Particle System)
Particle Animators move your particles over time. You use them to apply wind, drag, and color cycling to your particle systems.

The Particle Animator Inspector
Properties
| Does Animate Color | If enabled, particles cycle their color over their lifetime. |
| Color Animation | The five colors particles cycle through. All particles cycle over this; particles with a shorter lifetime than others animate faster. |
| World Rotation Axis | An optional world-space axis the particles rotate around. Use this to make advanced spell effects or give caustic bubbles some life. |
| Local Rotation Axis | An optional local-space axis the particles rotate around. Use this to make advanced spell effects or give caustic bubbles some life. |
| Size Grow | Use this to make particles grow in size over their lifetime. As randomized forces will spread particles out, it is often nice to make them grow in size so they don't fall apart. You can also use this to make smoke rise upwards or to simulate wind. |
| Rnd Force | A random force added to particles every frame. Use this to make smoke become more alive. |
| Force | The force applied to particles every frame, measured in world space. |
| Damping | How much particles are slowed every frame. A value of 1 gives no damping, while smaller values slow the particles down. |
| Autodestruct | If enabled, the GameObject the Particle Animator is attached to is destroyed when all particles disappear. |
Details
Particle Animators allow your particle systems to be dynamic. They let you change the color of your particles, apply forces and rotation, and choose to destroy them when they are finished emitting. For more information about particle systems, see Mesh Particle Emitters, Ellipsoid Particle Emitters, and Particle Renderers.
Animating Color
If you would like your particles to change color or fade in and out, enable Animate Color and specify the colors for the cycle. Any particle system that animates color cycles through the five colors you choose. The speed of the cycle is determined by the emitter's Energy value.
If you want your particles to fade in rather than appear instantly, set your first or last color to have a low alpha value.

An Animating Color particle system
Rotation Axes
Setting values in either the Local or World Rotation Axes causes all spawned particles to rotate around the indicated axis, with the Transform's position as the center. The greater the value entered on one of these axes, the faster the rotation.
Setting values in the Local Axes causes the rotating particles to adjust their rotation as the Transform's rotation changes, to match its local axes.
Setting values in the World Axes keeps the particles' rotation consistent, regardless of the Transform's rotation.
Forces & Damping
You use the Force to make particles accelerate in the direction the force specifies.
Damping can be used to decelerate or accelerate particles without changing their direction:
- A value of 1 means no damping is applied; the particles will neither slow down nor accelerate.
- A value of 0 means the particles stop immediately.
- A value of 2 means the particles double their speed every second.
Destroying GameObjects attached to particles
You can destroy the particle system and the GameObject it is attached to by enabling the AutoDestruct property. For example, if you have an oil drum, you can attach a particle system that has Emit disabled and AutoDestruct enabled. On collision, you enable the particle emitter. The explosion occurs, and once it is over the particle system and the oil drum are destroyed and removed from the scene.
Note that automatic destruction only takes effect after some particles have been emitted. To be precise, with AutoDestruct on, the object is destroyed when:
- some particles have already been emitted but all of them are now dead, or
- the emitter had Emit on at some point but Emit is now off.
Hints
- Use Color Animation to fade your particles in and out over their lifetime; otherwise you will get nasty-looking pops.
- Use the Rotation Axes to create whirlpool-like swirly motions.
World Particle Collider (Legacy Particle System)
The World Particle Collider is used to collide particles against other Colliders in the scene.

A Particle System colliding with a Mesh Collider
Properties
| Bounce Factor | Particles can be accelerated or slowed down when they collide with other objects. This factor is similar to the Particle Animator's Damping property. |
| Collision Energy Loss | Amount of energy (in seconds) a particle loses when colliding. If the energy goes below 0, the particle is killed. |
| Min Kill Velocity | If a particle's Velocity drops below this value because of a collision, the particle is eliminated. |
| Collides with | Which Layers the particles will collide against. |
| Send Collision Message | If enabled, every particle sends out a collision message that you can catch through scripting. |
Details
To create a particle system with a Particle Collider:
- Create a particle system.
- Add the World Particle Collider component to it.
Messaging
If Send Collision Message is enabled, any particle that collides sends the message OnParticleCollision() to both the particle's GameObject and the GameObject the particle collided with.
Hints
- Send Collision Message can be used to simulate bullets and apply damage on impact.
- Particle collision detection is slow when used with a lot of particles, so use it sparingly.
- Message sending introduces a large overhead and shouldn't be used for normal particle systems.
Particle Renderer (Legacy Particle System)
The Particle Renderer renders the Particle System on screen.

The Particle Renderer Inspector
Properties
| Materials | Reference to a list of Materials that will be displayed in the position of each individual particle. |
| Camera Velocity Scale | The amount of stretching applied to the particles based on the movement of the Camera. |
| Stretch Particles | Determines how the particles are rendered. |
| Billboard | The particles are rendered as if facing the camera. |
| Stretched | The particles face the direction they are moving. |
| SortedBillboard | The particles are sorted by depth. Use this when using a blending material. |
| VerticalBillboard | All particles are aligned flat along the X/Z axes. |
| HorizontalBillboard | All particles are aligned flat along the X/Y axes. |
| Length Scale | If Stretch Particles is set to Stretched, this value determines how long the particles are in their direction of motion. |
| Velocity Scale | If Stretch Particles is set to Stretched, this value determines the rate at which particles are stretched, based on their movement speed. |
| UV Animation | If either of these is set, UV coordinates will be generated for the particles for use with a tiled animated texture. See the section on animated textures below. |
| X Tile | Number of frames located across the X axis. |
| Y Tile | Number of frames located across the Y axis. |
| Cycles | How many times to loop the animation sequence. |
Details
A Particle Renderer is required for any particle system to be displayed on the screen.

A Particle Renderer makes the gunship's engine exhaust appear on the screen
Choosing a Material
When setting up a Particle Renderer it is very important to use an appropriate material, with a shader that renders both sides of the material. Most of the time you will want to use one of the built-in Particle shaders. There are also some pre-made materials available that you can use.
Creating a new material is easy:
- Create a new Material from the menu bar.
- The Material has a shader popup; choose one of the shaders in the Particles group.
- Then assign a Texture. Different shaders use the alpha channel of the texture slightly differently, but most of the time a value of black in the alpha channel will not show up on screen, while a value of white will.
Distorting particles
By default particles are rendered billboarded, as simple square sprites. This works well for smoke, explosions, and most other particle effects.
Particles can also be made to stretch with their velocity, which is useful for sparks, lightning, or laser beams. Length Scale and Velocity Scale affect how long the stretched particles will be.
Sorted Billboard can be used to sort all particles by depth. Sometimes this is necessary, mostly when using alpha-blended particle shaders. This can be expensive and should only be used if it really makes a quality difference when rendering.
Animated textures
Particle systems can be rendered with an animated tiled texture. To use this feature, make the texture out of a grid of images. As particles go through their life cycle, they cycle through the images. This is good for adding more life to your particles, or for making small rotating debris pieces.
Hints
- Use particle shaders with the Particle Renderer.
class-EllipsoidParticleEmitter
The Ellipsoid Particle Emitter spawns particles inside a sphere. Use the Ellipsoid property below to scale and stretch the sphere.

The Ellipsoid Particle Emitter Inspector
Properties
| Emit | If enabled, the emitter will emit particles. |
| Min Size | The minimum size each particle can be at the time when it is spawned. |
| Max Size | The maximum size each particle can be at the time when it is spawned. |
| Min Energy | The minimum lifetime of each particle, measured in seconds. |
| Max Energy | The maximum lifetime of each particle, measured in seconds. |
| Min Emission | The minimum number of particles that will be spawned every second. |
| Max Emission | The maximum number of particles that will be spawned every second. |
| World Velocity | The starting speed of particles in world space, along X, Y, and Z. |
| Local Velocity | The starting speed of particles along X, Y, and Z, measured in the object's orientation. |
| Rnd Velocity | A random speed along X, Y, and Z that is added to the velocity. |
| Emitter Velocity Scale | The amount of the emitter's speed that the particles inherit. |
| Tangent Velocity | The starting speed of particles along X, Y, and Z, across the Emitter's surface. |
| Simulate In World Space | If enabled, the particles don't move when the emitter moves. If false, when you move the emitter, the particles follow it around. |
| One Shot | If enabled, the number of particles specified by Min & Max Emission is spawned all at once. If disabled, the particles are generated in a long stream. |
| Ellipsoid | Scale of the sphere along X, Y, and Z within which the particles are spawned. |
| MinEmitterRange | Determines an empty area in the center of the sphere; use this to make particles appear only at the edge of the sphere. |
Details
Ellipsoid Particle Emitters (EPEs) are the basic emitter, and are included when you choose to add a Particle System to your scene from the menu. You define the boundaries within which particles are spawned and give the particles an initial velocity. From there, you use a Particle Animator to manipulate how the particles change over time to achieve the effect you want.
Particle Emitters work in conjunction with Particle Animators and Particle Renderers to create, manipulate, and display Particle Systems. All three Components must be present on an object before the particles will behave correctly. When particles are being emitted, all different velocities are added together to create the final velocity.
Spawning Properties
Spawning properties like Size, Energy, Emission, and Velocity will give your particle system distinct personality when trying to achieve different effects. Having a small Size could simulate fireflies or stars in the sky. A large Size could simulate dust clouds in a musty old building.
Energy and Emission will control how long your particles remain onscreen and how many particles can appear at any one time. For example, a rocket might have high Emission to simulate density of smoke, and high Energy to simulate the slow dispersion of smoke into the air.
Velocity will control how your particles move. You might want to change your Velocity in scripting to achieve interesting effects, or if you want to simulate a constant effect like wind, set your X and Z Velocity to make your particles blow away.
Simulate in World Space
If this is disabled, the position of each individual particle will always translate relative to the Position of the emitter. When the emitter moves, the particles will move along with it. If you have Simulate in World Space enabled, particles will not be affected by the translation of the emitter. For example, if you have a fireball that is spurting flames that rise, the flames will be spawned and float up in space as the fireball gets further away. If Simulate in World Space is disabled, those same flames will move across the screen along with the fireball.
Emitter Velocity Scale
This property will only apply if Simulate in World Space is enabled.
If this property is set to 1, the particles will inherit the exact translation of the emitter at the time they are spawned. If it is set to 2, the particles will inherit double the emitter's translation when they are spawned. 3 is triple the translation, etc.
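As a rough sketch of that arithmetic (plain Python rather than the Unity API; the function and argument names are illustrative):

```python
def initial_velocity(start_velocity, emitter_velocity, emitter_velocity_scale):
    """Starting velocity of a particle when Simulate in World Space is on.

    On top of its configured starting velocity, the particle inherits the
    emitter's current velocity multiplied by Emitter Velocity Scale.
    """
    return tuple(v + e * emitter_velocity_scale
                 for v, e in zip(start_velocity, emitter_velocity))

# An emitter moving at (2, 0, 0) with a scale of 1 passes its full motion on:
print(initial_velocity((0.0, 1.0, 0.0), (2.0, 0.0, 0.0), 1.0))  # (2.0, 1.0, 0.0)
```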
One Shot
One Shot emitters will create all particles within the Emission property all at once, and cease to emit particles over time. Here are some examples of different particle system uses with One Shot Enabled or Disabled:
Enabled:
- Explosion
- Water splash
- Magic spell
Disabled:
- Gun barrel smoke
- Wind effect
- Waterfall
Min Emitter Range
Min Emitter Range determines the depth within the ellipsoid at which particles can be spawned. Setting it to 0 allows particles to spawn anywhere from the center to the outer edge of the ellipsoid. Setting it to 1 restricts spawning to the outer edge of the ellipsoid.

Min Emitter Range of 0

Min Emitter Range of 1
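A minimal sketch of how such a spawn position could be sampled (illustrative Python, not Unity's actual implementation):

```python
import random

def spawn_offset(ellipsoid, min_emitter_range):
    """Pick a spawn offset inside the ellipsoid, honoring Min Emitter Range.

    A Min Emitter Range of 0 allows spawning anywhere out from the center;
    a value of 1 confines spawning to the outer edge.
    """
    # Rejection-sample a random direction on the unit sphere.
    while True:
        d = [random.uniform(-1.0, 1.0) for _ in range(3)]
        n = sum(c * c for c in d) ** 0.5
        if 0.0 < n <= 1.0:
            break
    d = [c / n for c in d]
    # Radius between the empty inner region and the surface.
    r = random.uniform(min_emitter_range, 1.0)
    return tuple(c * r * s for c, s in zip(d, ellipsoid))
```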
Hints
- Be careful of using many large particles. This can seriously hinder performance on low-level machines. Always try to use the minimum number of particles to attain an effect.
- The Emit property works in conjunction with the AutoDestruct property of the Particle Animator. Through scripting, you can cease the emitter from emitting, and then AutoDestruct will automatically destroy the Particle System and the GameObject it is attached to.
class-MeshParticleEmitter
The Mesh Particle Emitter emits particles around a mesh. Particles are spawned from the surface of the mesh, which can be necessary when you want particles to interact with objects in a complex way.

The Mesh Particle Emitter Inspector
Properties
| Emit | If enabled, the emitter will emit particles. |
| Min Size | The minimum size each particle can be at the time when it is spawned. |
| Max Size | The maximum size each particle can be at the time when it is spawned. |
| Min Energy | The minimum lifetime of each particle, measured in seconds. |
| Max Energy | The maximum lifetime of each particle, measured in seconds. |
| Min Emission | The minimum number of particles that will be spawned every second. |
| Max Emission | The maximum number of particles that will be spawned every second. |
| World Velocity | The starting speed of particles in world space, along X, Y, and Z. |
| Local Velocity | The starting speed of particles along X, Y, and Z, measured in the object's orientation. |
| Rnd Velocity | A random speed along X, Y, and Z that is added to the velocity. |
| Emitter Velocity Scale | The amount of the emitter's speed that the particles inherit. |
| Tangent Velocity | The starting speed of particles along X, Y, and Z, across the Emitter's surface. |
| Simulate In World Space | If enabled, the particles don't move when the emitter moves. If false, when you move the emitter, the particles follow it around. |
| One Shot | If enabled, the number of particles specified by Min & Max Emission is spawned all at once. If disabled, the particles are generated in a long stream. |
| Interpolate Triangles | If enabled, particles are spawned all over the mesh's surface. If disabled, particles are only spawned from the mesh's vertices. |
| Systematic | If enabled, particles are spawned in the order of the vertices defined in the mesh. Although you seldom have direct control over vertex order in meshes, most 3D modeling applications have a very systematic setup when using primitives. It is important that the mesh contains no faces in order for this to work. |
| Min Normal Velocity | Minimum speed at which particles are thrown away from the mesh. |
| Max Normal Velocity | Maximum speed at which particles are thrown away from the mesh. |
Details
Mesh Particle Emitters (MPEs) are used when you want more precise control over the spawn position and direction than the simpler Ellipsoid Particle Emitter gives you. They can be used to create more advanced effects.
MPEs work by emitting particles at the vertices of the attached mesh, so areas of the mesh that are denser with polygons will also be denser with particle emission.
Particle Emitters work in conjunction with Particle Animators and Particle Renderers to create, manipulate, and display Particle Systems. All three Components must be present on an object before the particles will behave correctly. When particles are being emitted, all different velocities are added together to create the final velocity.
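The "added together" rule is plain vector addition; a sketch (illustrative Python, with every component already expressed in world space; in Unity the local and tangent components would first be rotated by the emitter's orientation):

```python
def final_velocity(world, local_in_world, rnd, tangent_in_world, inherited):
    """Sum the emitter's velocity contributions into one starting velocity."""
    components = (world, local_in_world, rnd, tangent_in_world, inherited)
    return tuple(sum(axis) for axis in zip(*components))

print(final_velocity((0, 1, 0), (0, 0, 2), (1, 0, 0), (0, 0, 0), (3, 0, 0)))
# (4, 1, 2)
```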
Spawning Properties
Spawning properties like Size, Energy, Emission, and Velocity will give your particle system distinct personality when trying to achieve different effects. Having a small Size could simulate fireflies or stars in the sky. A large Size could simulate dust clouds in a musty old building.
Energy and Emission will control how long your particles remain onscreen and how many particles can appear at any one time. For example, a rocket might have high Emission to simulate density of smoke, and high Energy to simulate the slow dispersion of smoke into the air.
Velocity will control how your particles move. You might want to change your Velocity in scripting to achieve interesting effects, or if you want to simulate a constant effect like wind, set your X and Z Velocity to make your particles blow away.
Simulate in World Space
If this is disabled, the position of each individual particle will always translate relative to the Position of the emitter. When the emitter moves, the particles will move along with it. If you have Simulate in World Space enabled, particles will not be affected by the translation of the emitter. For example, if you have a fireball that is spurting flames that rise, the flames will be spawned and float up in space as the fireball gets further away. If Simulate in World Space is disabled, those same flames will move across the screen along with the fireball.
Emitter Velocity Scale
This property will only apply if Simulate in World Space is enabled.
If this property is set to 1, the particles will inherit the exact translation of the emitter at the time they are spawned. If it is set to 2, the particles will inherit double the emitter's translation when they are spawned. 3 is triple the translation, etc.
One Shot
One Shot emitters will create all particles within the Emission property all at once, and cease to emit particles over time. Here are some examples of different particle system uses with One Shot Enabled or Disabled:
Enabled:
- Explosion
- Water splash
- Magic spell
Disabled:
- Gun barrel smoke
- Wind effect
- Waterfall
Interpolate Triangles
Setting the emitter to Interpolate Triangles allows particles to be spawned between the mesh's vertices. This option is off by default, so particles are only spawned at the vertices.

A sphere with Interpolate Triangles off (the default)
Enabling this option spawns particles on and in between the vertices, essentially anywhere on the mesh's surface (seen below).

A sphere with Interpolate Triangles on
Note that even with Interpolate Triangles enabled, particles will still be denser in areas of the mesh that are denser with polygons.
Systematic
Enabling Systematic spawns particles in the order of the mesh's vertices. The vertex order is set in your 3D modeling application.

An MPE attached to a sphere with Systematic enabled
Normal Velocity
Normal Velocity controls the speed at which particles are emitted along the normal of the surface where they are spawned.
For example, create a Mesh Particle System, use a cube mesh as the emitter, enable Interpolate Triangles, and set Normal Velocity Min and Max to 1. The particles will now be emitted from the faces of the cube in straight lines.
See Also
Hints
- Be careful of using many large particles. This can seriously hinder performance on low-level machines. Always try to use the minimum number of particles to attain an effect.
- The Emit property works in conjunction with the AutoDestruct property of the Particle Animator. Through scripting, you can cease the emitter from emitting, and then AutoDestruct will automatically destroy the Particle System and the GameObject it is attached to.
- MPEs can also be used to create the glow from many lamps placed in a scene. Simply make a mesh with one vertex at the center of each lamp, and build an MPE from that with a halo material. Great for evil sci-fi worlds.
class-ParticleAnimator
Particle Animators move your particles over time. You use them to apply wind, drag, and color cycling to your particle systems.

The Particle Animator Inspector
Properties
| Does Animate Color | If enabled, particles cycle their color over their lifetime. |
| Color Animation | The five colors particles cycle through. All particles cycle over this; particles with a shorter lifetime than others animate faster. |
| World Rotation Axis | An optional world-space axis the particles rotate around. Use this to make advanced spell effects or give caustic bubbles some life. |
| Local Rotation Axis | An optional local-space axis the particles rotate around. Use this to make advanced spell effects or give caustic bubbles some life. |
| Size Grow | Use this to make particles grow in size over their lifetime. As randomized forces will spread particles out, it is often nice to make them grow in size so they don't fall apart. You can also use this to make smoke rise upwards or to simulate wind. |
| Rnd Force | A random force added to particles every frame. Use this to make smoke become more alive. |
| Force | The force applied to particles every frame, measured in world space. |
| Damping | How much particles are slowed every frame. A value of 1 gives no damping, while smaller values slow the particles down. |
| Autodestruct | If enabled, the GameObject the Particle Animator is attached to is destroyed when all particles disappear. |
Details
Particle Animators allow your particle systems to be dynamic. They let you change the color of your particles, apply forces and rotation, and choose to destroy them when they are finished emitting. For more information about particle systems, see Mesh Particle Emitters, Ellipsoid Particle Emitters, and Particle Renderers.
Animating Color
If you would like your particles to change color or fade in and out, enable Animate Color and specify the colors for the cycle. Any particle system that animates color cycles through the five colors you choose. The speed of the cycle is determined by the emitter's Energy value.
If you want your particles to fade in rather than appear instantly, set your first or last color to have a low alpha value.

An Animating Color particle system
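The five-color cycle can be sketched as linear interpolation over a particle's normalized age (illustrative Python; Unity's exact blending may differ):

```python
def animated_color(colors, age, lifetime):
    """Blend through the animation colors over a particle's lifetime.

    Particles with a shorter lifetime reach the later colors sooner,
    which is why they appear to animate faster.
    """
    t = max(0.0, min(1.0, age / lifetime)) * (len(colors) - 1)
    i = min(int(t), len(colors) - 2)
    f = t - i
    return tuple(a + (b - a) * f for a, b in zip(colors[i], colors[i + 1]))

# Fade white in from transparent, hold, then fade out again:
ramp = [(1.0, 1.0, 1.0, 0.0), (1.0, 1.0, 1.0, 1.0), (1.0, 1.0, 1.0, 1.0),
        (1.0, 1.0, 1.0, 1.0), (1.0, 1.0, 1.0, 0.0)]
print(animated_color(ramp, 0.0, 2.0))  # (1.0, 1.0, 1.0, 0.0)
```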
Rotation Axes
Setting values in either the Local or World Rotation Axes causes all spawned particles to rotate around the indicated axis, with the Transform's position as the center. The greater the value entered on one of these axes, the faster the rotation.
Setting values in the Local Axes causes the rotating particles to adjust their rotation as the Transform's rotation changes, to match its local axes.
Setting values in the World Axes keeps the particles' rotation consistent, regardless of the Transform's rotation.
Forces & Damping
You use the Force to make particles accelerate in the direction the force specifies.
Damping can be used to decelerate or accelerate particles without changing their direction:
- A value of 1 means no damping is applied; the particles will neither slow down nor accelerate.
- A value of 0 means the particles stop immediately.
- A value of 2 means the particles double their speed every second.
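A sketch of that rule applied over a frame of `dt` seconds, so the per-second factor comes out as listed above (illustrative Python, not the Unity implementation):

```python
def apply_damping(speed, damping, dt):
    """Scale a particle's speed so that over one full second it is
    multiplied by `damping` (1 = unchanged, 0 = stopped, 2 = doubled)."""
    return speed * damping ** dt

print(apply_damping(10.0, 2.0, 1.0))  # 20.0
```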
Destroying GameObjects attached to particles
You can destroy the particle system and the GameObject it is attached to by enabling the AutoDestruct property. For example, if you have an oil drum, you can attach a particle system that has Emit disabled and AutoDestruct enabled. On collision, you enable the particle emitter. The explosion occurs, and once it is over the particle system and the oil drum are destroyed and removed from the scene.
Note that automatic destruction only takes effect after some particles have been emitted. To be precise, with AutoDestruct on, the object is destroyed when:
- some particles have already been emitted but all of them are now dead, or
- the emitter had Emit on at some point but Emit is now off.
Hints
- Use Color Animation to fade your particles in and out over their lifetime; otherwise you will get nasty-looking pops.
- Use the Rotation Axes to create whirlpool-like swirly motions.
class-ParticleRenderer
The Particle Renderer renders the Particle System on screen.

The Particle Renderer Inspector
Properties
| Materials | Reference to a list of Materials that will be displayed in the position of each individual particle. |
| Camera Velocity Scale | The amount of stretching applied to the particles based on the movement of the Camera. |
| Stretch Particles | Determines how the particles are rendered. |
| Billboard | The particles are rendered as if facing the camera. |
| Stretched | The particles face the direction they are moving. |
| SortedBillboard | The particles are sorted by depth. Use this when using a blending material. |
| VerticalBillboard | All particles are aligned flat along the X/Z axes. |
| HorizontalBillboard | All particles are aligned flat along the X/Y axes. |
| Length Scale | If Stretch Particles is set to Stretched, this value determines how long the particles are in their direction of motion. |
| Velocity Scale | If Stretch Particles is set to Stretched, this value determines the rate at which particles are stretched, based on their movement speed. |
| UV Animation | If either of these is set, UV coordinates will be generated for the particles for use with a tiled animated texture. See the section on animated textures below. |
| X Tile | Number of frames located across the X axis. |
| Y Tile | Number of frames located across the Y axis. |
| Cycles | How many times to loop the animation sequence. |
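The tile a particle shows can be derived from its normalized age; a sketch of the bookkeeping (illustrative Python, with the parameter names mirroring the table above):

```python
def uv_frame(age_fraction, x_tile, y_tile, cycles):
    """Return the (column, row) of the tile shown at a point in a
    particle's life, looping over the whole sheet `cycles` times."""
    total = x_tile * y_tile
    frame = int(age_fraction * total * cycles) % total
    return frame % x_tile, frame // x_tile

# A 4x4 sheet played once: halfway through life we are on frame 8 of 16.
print(uv_frame(0.5, 4, 4, 1))  # (0, 2)
```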
Details
A Particle Renderer is required for any particle system to be displayed on the screen.

A Particle Renderer makes the gunship's engine exhaust appear on the screen
Choosing a Material
When setting up a Particle Renderer it is very important to use an appropriate material, with a shader that renders both sides of the material. Most of the time you will want to use one of the built-in Particle shaders. There are also some pre-made materials available that you can use.
Creating a new material is easy:
- Create a new Material from the menu bar.
- The Material has a shader popup; choose one of the shaders in the Particles group.
- Then assign a Texture. Different shaders use the alpha channel of the texture slightly differently, but most of the time a value of black in the alpha channel will not show up on screen, while a value of white will.
Distorting particles
By default particles are rendered billboarded, as simple square sprites. This works well for smoke, explosions, and most other particle effects.
Particles can also be made to stretch with their velocity, which is useful for sparks, lightning, or laser beams. Length Scale and Velocity Scale affect how long the stretched particles will be.
Sorted Billboard can be used to sort all particles by depth. Sometimes this is necessary, mostly when using alpha-blended particle shaders. This can be expensive and should only be used if it really makes a quality difference when rendering.
Animated textures
Particle systems can be rendered with an animated tiled texture. To use this feature, make the texture out of a grid of images. As particles go through their life cycle, they cycle through the images. This is good for adding more life to your particles, or for making small rotating debris pieces.
Hints
- Use particle shaders with the Particle Renderer.
class-WorldParticleCollider
The World Particle Collider is used to collide particles against other Colliders in the scene.

A Particle System colliding with a Mesh Collider
Properties
| Bounce Factor | Particles can be accelerated or slowed down when they collide with other objects. This factor is similar to the Particle Animator's Damping property. |
| Collision Energy Loss | Amount of energy (in seconds) a particle loses when colliding. If the energy goes below 0, the particle is killed. |
| Min Kill Velocity | If a particle's Velocity drops below this value because of a collision, the particle is eliminated. |
| Collides with | Which Layers the particles will collide against. |
| Send Collision Message | If enabled, every particle sends out a collision message that you can catch through scripting. |
Details
To create a particle system with a Particle Collider:
- Create a particle system.
- Add the World Particle Collider component to it.
Messaging
If Send Collision Message is enabled, any particle that collides sends the message OnParticleCollision() to both the particle's GameObject and the GameObject the particle collided with.
Hints
- Send Collision Message can be used to simulate bullets and apply damage on impact.
- Particle collision detection is slow when used with a lot of particles, so use it sparingly.
- Message sending introduces a large overhead and shouldn't be used for normal particle systems.
comp-RenderingGroup
This group contains all Components that relate to rendering in-game and user interface elements. Lighting and special effects are also included in this group.
- Camera
- Flare Layer
- GUI Layer
- GUI Text
- GUI Texture
- Light
- Light Probe Group
- Occlusion Area (Unity Pro only)
- Occlusion Portals
- Skybox
- Level of Detail (Unity Pro only)
- 3D Textures
class-Camera
Cameras are the devices that capture and display the world to the player. By customizing and manipulating cameras, you can make the presentation of your game truly unique. You can use as many cameras as you like in a scene, and they can be set to render in any order, at any place on the screen, or to render only certain parts of the screen.

Unity's flexible Camera object
Properties
| Clear Flags | Determines which parts of the screen will be cleared. This is handy when using multiple cameras to draw different game elements. |
| Background | The color applied to the remaining screen after all elements in view have been drawn and there is no skybox. |
| Culling Mask | Includes or omits layers of objects to be rendered by the camera. Assign layers to your objects in the Inspector. |
| Projection | Toggles the camera's capability to simulate perspective. |
| Perspective | The camera renders objects with perspective intact. |
| Orthographic | The camera renders objects uniformly, with no sense of perspective. |
| Size (when Orthographic is selected) | The viewport size of the camera when set to Orthographic. |
| Field of view | The width of the camera's view angle, measured in degrees along the local Y axis. |
| Clipping Planes | The distances from the camera at which rendering starts and stops. |
| Near | The closest point relative to the camera at which drawing occurs. |
| Far | The furthest point relative to the camera at which drawing occurs. |
| Normalized View Port Rect | Four values (0-1) that indicate where on the screen this camera's view is drawn, in screen coordinates. |
| X | The beginning horizontal position at which the camera view is drawn. |
| Y | The beginning vertical position at which the camera view is drawn. |
| W (Width) | The width of the camera output on the screen. |
| H (Height) | The height of the camera output on the screen. |
| Depth | The camera's position in the draw order. Cameras with a larger value are drawn on top of cameras with a smaller value. |
| Rendering Path | Options for defining which rendering method this camera uses. |
| Use Player Settings | This camera uses whichever Rendering Path is set in the Player Settings. |
| Vertex Lit | All objects rendered by this camera are rendered as vertex-lit objects. |
| Forward | All objects are rendered with one pass per material, as was standard in Unity 2.x. |
| Deferred Lighting (Unity Pro only) | All objects are drawn once without lighting, then the lighting of all objects is rendered together at the end of the render queue. |
| Target Texture (Unity Pro/Advanced only) | Reference to a Render Texture that will receive the camera's view. Setting this reference disables the camera's ability to render to the screen. |
| HDR | Enables High Dynamic Range rendering for this camera. |
Details
Cameras are essential for displaying your game to the player. They can be customized, scripted, or parented to achieve just about any kind of effect imaginable. For a puzzle game, you might keep the camera static for a full view of the puzzle. For a first-person shooter, you would parent the camera to the player character and place it at the character's eye level. For a racing game, you would likely have the camera follow the player's vehicle.
You can create multiple cameras and assign each one a different Depth. Cameras are drawn from low Depth to high Depth; in other words, a camera with a Depth of 2 is drawn on top of a camera with a Depth of 1. You can adjust the values of the Normalized View Port Rectangle property to resize and position the camera's view onscreen. This lets you create multiple mini-views such as missile cams, map views, rear-view mirrors, and so on.
Render Path
Unity supports different rendering paths. You should choose which one to use depending on your game content and target platform / hardware. Different rendering paths have different feature and performance characteristics, mostly affecting lights and shadows. The rendering path used by your project is chosen in the Player Settings; additionally, you can override it for each camera.
For more information, see the rendering paths page.
Clear Flags
Each camera stores color and depth information when it renders its view. The portions of the screen that are not drawn are empty, and display the skybox by default. When you use multiple cameras, each one stores its own color and depth information in buffers, accumulating more data as each camera renders. As any particular camera in your scene renders its view, you can set its Clear Flags to clear different collections of this buffer information, by choosing one of four options:
Skybox
This is the default setting. Any empty portions of the screen will display the current camera's skybox. If the current camera has no skybox set, it defaults to the skybox chosen in the Render Settings, and then falls back to the Background Color. Alternatively, a Skybox component can be added to the camera. If you want to create a new skybox, you can use this guide.
Solid Color
Any empty portions of the screen will display the current camera's Background Color.
Depth Only
If you want to draw a player's gun without letting it get clipped inside the environment, set one camera at Depth 0 to draw the environment, and another camera at Depth 1 to draw the weapon alone. Set the weapon camera's Clear Flags to Depth only. This keeps the graphical display of the environment on the screen, but discards all information about where each object exists in 3D space. When the gun is drawn, its opaque parts completely cover anything already drawn, regardless of how close the gun is to the wall.

The gun is drawn last, after the depth buffer of the cameras before it has been cleared
Don't Clear
This mode does not clear either the color or the depth buffer. The result is that each frame is drawn over the previous one, producing a smear-like effect. This isn't typically used in games, and is best suited to use with a custom shader.
Clip Planes
The Near and Far Clip Plane properties determine where the camera's view begins and ends. The planes are laid out perpendicular to the camera's direction and are measured from its position. The Near plane is the closest location that will be rendered, and the Far plane is the furthest.
The clipping planes also determine how depth buffer precision is distributed over the scene. In general, to get better precision you should move the Near plane as far away as possible.
The near and far clip planes, together with the planes defined by the camera's field of view, describe what is popularly known as the camera frustum. When rendering, Unity does not display objects that lie completely outside this frustum. This is called frustum culling, and it happens regardless of whether you use occlusion culling in your game.
For performance reasons, you might want to cull small objects earlier. For example, small rocks and debris could be made invisible at a much shorter distance than large buildings. To do this, put the small objects into a separate layer and set up per-layer cull distances using the Camera.layerCullDistances script function.
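The per-layer rule might be sketched like this (illustrative Python, not the Unity API; a per-layer distance of 0 is assumed to fall back to the far clip plane, which is how Camera.layerCullDistances behaves):

```python
def is_culled(distance, layer, layer_cull_distances, far_plane):
    """An object is culled once it is further from the camera than its
    layer's cull distance (or the far plane where none is set)."""
    limit = layer_cull_distances[layer] or far_plane
    return distance > limit

# Layer 8 holds small debris culled at 50 units; other layers use the far plane.
cull = [0.0] * 32
cull[8] = 50.0
print(is_culled(60.0, 8, cull, 1000.0))  # True
print(is_culled(60.0, 0, cull, 1000.0))  # False
```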
Culling Mask
The Culling Mask is used for selectively rendering groups of objects using layers. More information on using layers can be found here.
It is common practice to put the user interface on a separate layer, then render it by itself with a separate camera set to render only the UI layer.
For the UI to display on top of the other camera views, you also need to set its Clear Flags to Depth only and make sure the UI camera's Depth is higher than that of the other cameras.
Normalized Viewport Rectangle
Normalized Viewport Rectangles are for defining the portion of the screen that the current camera view is drawn on. You can put a map view in the lower-left corner of the screen, or a missile-tip view in the upper-right corner. With a bit of design work, you can use the Viewport Rectangle to create some unique behaviors.
It's easy to create a two-player split-screen effect using the Normalized Viewport Rectangle. After you have created your two cameras, set both cameras' H values to 0.5, then set player one's Y value to 0.5 and player two's Y value to 0. This makes player one's camera display from halfway up the screen to the top, and player two's camera start at the bottom and stop halfway up the screen.

Two-player display created with the Normalized Viewport Rectangle
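Because the rectangle values are normalized (0 to 1), converting a camera's viewport to pixels is a straight multiplication; a sketch of the split-screen setup above (illustrative Python, not the Unity API):

```python
def viewport_to_pixels(rect, screen_w, screen_h):
    """Convert a normalized viewport rect (x, y, w, h) to pixels, with
    (0, 0) at the bottom-left corner of the screen."""
    x, y, w, h = rect
    return (x * screen_w, y * screen_h, w * screen_w, h * screen_h)

player_one = (0.0, 0.5, 1.0, 0.5)  # top half of the screen
player_two = (0.0, 0.0, 1.0, 0.5)  # bottom half of the screen
print(viewport_to_pixels(player_one, 1024, 768))  # (0.0, 384.0, 1024.0, 384.0)
```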
Orthographic
Marking a camera as Orthographic removes all perspective from its view. This is mostly useful for making isometric or 2D games.
Note that fog is rendered uniformly in orthographic camera mode and may therefore not appear as expected. For the reason why, see the component reference on Render Settings.

Perspective camera

Orthographic camera. Objects do not get smaller with distance here!
Render Texture
This feature is only available with a Unity Pro/Advanced license. It places the camera's view onto a texture that can then be applied to another object. This makes it easy to create sports-arena video monitors, surveillance cameras, reflections, and so on.

A Render Texture used to create a live arena cam
Hints
- Cameras can be instantiated, parented, and scripted just like any other GameObject.
- To increase the sense of speed in a racing game, use a high Field of View.
- Cameras can be used in physics simulations if you add a Rigidbody component.
- There is no limit to the number of cameras you can have in your scenes.
- Orthographic cameras are great for making 3D user interfaces.
- If you are experiencing depth artifacts (surfaces close to each other flickering), try setting the Near Plane as large as possible.
- Cameras cannot render to the game screen and a Render Texture at the same time; it is one or the other.
- Pro license holders have the option of rendering a camera's view to a texture (Render-to-Texture) for even more unique effects.
- Unity ships with pre-installed camera scripts. Experiment with them to get a taste of what's possible.
class-FlareLayer
The Flare Layer component can be attached to cameras to make Lens Flares appear in the image. By default, cameras already have a Flare Layer attached.
Page last updated: 2012-11-13
class-GUILayer
A GUI Layer component is attached to a camera to enable the rendering of 2D GUIs.
When a GUI Layer is attached to a camera, it renders all GUI Textures and GUI Texts in the scene. GUI Layers do not affect UnityGUI in any way.
You can enable or disable GUI rendering on a single camera by clicking the check box of the GUI Layer in the Inspector.
Page last updated: 2012-11-13
class-GuiText
GUI Text displays text of any imported font in screen coordinates.

The GUI Text Inspector
Please note: Unity 2.0 introduced UnityGUI, a GUI scripting system. You may prefer creating user interface elements with UnityGUI instead of GUI Texts. Read more about how to use UnityGUI in the GUI Scripting Guide.
Properties
| Text | The string to display. |
| Anchor | The point at which the Text shares the position of the Transform. |
| Alignment | How multiple lines are aligned within the GUIText. |
| Pixel Offset | Offset of the text relative to the position of the GUIText on the screen. |
| Line Spacing | How much space there is between lines of Text. |
| Tab Size | How much space is inserted for a tab ('\t') character, as a multiple of the space character offset. |
| Font | The Font to use when rendering the text. |
| Material | Reference to the Material containing the characters to be drawn. If set, this property overrides the one in the Font asset. |
| Font Size | The font size to use. Set to 0 to use the font's default size. Only applies to dynamic fonts. |
| Font Style | The font style to use (Normal, Bold, Italic, or Bold and Italic). Only applies to dynamic fonts. |
| Pixel Correct | If enabled, all Text characters are drawn at the size of the imported font texture. If disabled, the characters are resized based on the Transform's Scale. |
| Rich Text | If enabled, HTML-style tags can be used for text formatting. |
Details
GUI Texts are used to print text onto the screen in 2D. The camera must have a GUI Layer attached in order to render the text. Cameras include a GUI Layer by default, so don't remove it if you want to display a GUI Text. GUI Texts are positioned using only the X and Y axes. Rather than being positioned in world coordinates, they are positioned in screen coordinates, where (0,0) is the bottom-left and (1,1) is the top-right corner of the screen.
To import a font, see the Font page.
Pixel Correct
By default, GUI Texts are rendered with Pixel Correct enabled. This makes them look crisp, and they stay the same size in pixels regardless of the screen resolution.
Hints
- When entering text into the Text property, you can create a line break by holding Alt and pressing Return.
- If you are scripting the Text property, you can add line breaks by inserting the escape character "\n" in your strings.
- You can download free TrueType fonts from 1001freefonts.com (download the Windows fonts, since they contain TrueType fonts).
class-GuiTexture
GUI Textures are displayed as flat images in 2D. They are made especially for user interface elements, buttons, or decorations. Their positioning and scaling is performed along the X and Y axes only, and they are measured in screen coordinates rather than world coordinates.

The GUI Texture Inspector
Please note: Unity 2.0 introduced UnityGUI, a GUI scripting system. You may prefer creating user interface elements with UnityGUI instead of GUI Textures. Read more about how to use UnityGUI in the GUI Scripting Guide.
Properties
| Texture | Reference to the Texture that will be displayed. |
| Color | Color that tints the Texture drawn on screen. |
| Pixel Inset | Used for pixel-level control of the scaling and positioning of the GUI Texture. All values are measured relative to the position of the GUI Texture's Transform. |
| X | Left-most pixel position of the texture. |
| Y | Bottom-most pixel position of the texture. |
| Width | Right-most pixel position of the texture. |
| Height | Top-most pixel position of the texture. |
| Left Border | Number of pixels from the left that are not affected by scale. |
| Right Border | Number of pixels from the right that are not affected by scale. |
| Top Border | Number of pixels from the top that are not affected by scale. |
| Bottom Border | Number of pixels from the bottom that are not affected by scale. |
詳細
GUI テクスチャを作成するには、次の手順に従います。
- Project View で、テクスチャを選択します。
- メニューバーから を選択します。
GUI テクスチャは、ゲームのインターフェースの背景やボタン、その他の要素をプレイヤーに表示するのに最適です。 スクリプティングを通じて、例えば、テクスチャ上にマウスのカーソルを合わせた場合や、積極的にテクスチャをクリックした場合に、テクスチャの各状態に視覚的なフィードバックを簡単に提供できます。 GUI テクスチャがどのように計算されるかの基本的な分析になります。

GUI テクスチャは、これらのルールに従って配置されます

ここで確認される GUI 要素はすべて GUI テクスチャで作成されています
境界
画像の各端でテクスチャで縮小拡大されないピクセル数。 ゲームが実行される解像度がほとんどわからないため、GUI が縮小拡大されるでしょう。 GUI テクスチャの中には、端に境界があり、この境界はピクセルの正確な数となります。 これを機能させるためには、テクスチャから境界のサイズを一致させるように設定します。
Pixel Inset
Pixel Insetは、テクスチャが画面の解像度で縮小拡大されないようにし、固定されたピクセルサイズで維持するために使用します。 これにより、縮小拡大なしでテクスチャをレンダリングできます。 つまり、高解像度でゲームを実行するプレイヤーには、画面のより小さいエリアにテキストが表示され、ゲームプレイの画像に対して、より多くの画面上の場所を持つことができます。
これを効率的に使用するには、GUI テクスチャのトランスフォームノスケールを (0, 0, 0)に設定する必要があります。 Pixel Insetがテクスチャのサイズを完全に制御している場合、Pixel Insetをテクスチャの正確なピクセルサイズに設定できます。
ヒント
- 各階層化された GUI テクスチャの深さは、グローバルな Z 位置ではなく、その個々の Z トランスフォームの位置によって決定されます。
- GUI テクスチャは、メニュー画面や、停止 / エスケープ メニュー画面の作成に最適です。
- 幅と高さに特定のピクセル数を指定したい任意の GUI テクスチャでPixel Insetを使用する必要があります。
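As a hedged sketch of the Pixel Inset technique just described (assuming the script sits on a GameObject with a GUI Texture whose Transform scale has been set to (0, 0, 0), and that the source texture is 128x128 pixels -- illustrative values only):

```javascript
// Keep the texture at a fixed 128x128 pixel size, centered on the transform position,
// so it does not scale with the screen resolution.
function Start () {
	guiTexture.pixelInset = Rect (-64, -64, 128, 128);
}
```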
class-Light
Lights will bring personality and flavor to your game. You use lights to illuminate the scenes and objects to create the perfect visual mood. Lights can be used to simulate the sun, burning match light, flashlights, gun-fire, or explosions, just to name a few.

The Light Inspector
There are four types of lights in Unity:
- Point lights shine from a location equally in all directions, like a light bulb.
- Directional lights are placed infinitely far away and affect everything in the scene, like the sun.
- Spot lights shine from a point in a direction and only illuminate objects within a cone, like the headlights of a car.
- Area lights (only available for lightmap baking) shine in all directions to one side of a rectangular area of a plane.
Lights can also cast Shadows. Shadows are a Pro-only feature. Shadow properties can be adjusted on a per-light basis.
Properties
| Type | The current type of light object. |
| Directional | A light placed infinitely far away. It affects everything in the scene. |
| Point | A light that shines equally in all directions from its location, affecting all objects within its Range. |
| Spot | A light that shines everywhere within a cone defined by Spot Angle and Range. Only objects within this region are affected by the light. |
| Area | A light that shines in all directions to one side of a rectangular area of a plane, affecting objects within its Range. The rectangle is defined by the X and Y properties. Area lights are only available during lightmap baking and are not rendered in realtime. |
| Range | How far light is emitted from the center of the object. Point/Spot only. |
| Spot Angle | Determines the angle of the cone, in degrees. Spot only. |
| Color | The color of the light emitted. |
| Intensity | Brightness of the light. The default value for a Point/Spot light is 1. The default value for a Directional light is 0.5. |
| Cookie | The alpha channel of this texture is used as a mask that determines how bright the light is at different places. If the light is a Spot or a Directional light, this must be a 2D texture. If the light is a Point light, it must be a Cubemap. |
| Cookie Size | Scales the projection of a Cookie. Directional only. |
| Shadow Type (Pro only) | No, Hard or Soft shadows that will be cast by this light. Only applicable to desktop build targets. Soft shadows are more expensive. |
| Strength | The darkness of the shadows. Values are between 0 and 1. |
| Resolution | Detail level of the shadows. |
| Bias | Offset used when comparing the pixel position in light space with the value from the shadow map. See Shadow Mapping and the Bias Property below. |
| Softness | Scales the penumbra region (the offset of blur samples). Directional only. |
| Softness Fade | Shadow softness fade based on the distance from the camera. Directional only. |
| Draw Halo | If checked, a spherical halo of light will be drawn with a radius equal to Range. |
| Flare | Optional reference to the Flare that will be rendered at the light's position. |
| Render Mode | Importance of this light. This can affect lighting fidelity and performance; see Performance Considerations below. The options are: |
| Auto | The rendering method is determined at runtime depending on the brightness of nearby lights and the current Quality Settings for desktop build targets. |
| Important | This light is always rendered at per-pixel quality. Use this only for very important effects (e.g. the headlights of a player's car). |
| Not Important | This light is always rendered in a faster, vertex/object light mode. |
| Culling Mask | Use to selectively exclude groups of objects from being affected by the light; see Layers. |
| Lightmapping | The Lightmapping mode: RealtimeOnly, Auto or BakedOnly; see the Dual Lightmaps description. |
| X | (Area lights only) The width of the rectangular light area. |
| Y | (Area lights only) The height of the rectangular light area. |
Details
There are four basic light types in Unity. Each type can be customized to fit your needs.
You can create a texture that contains an alpha channel and assign it to the Cookie variable of a light. The Cookie will be projected from the light. The Cookie's alpha mask modulates the light's amount, creating light and dark spots on surfaces. This is a great way to add complexity or atmosphere to a scene.
All built-in shaders in Unity work seamlessly with any type of light. However, VertexLit shaders cannot display Cookies or Shadows.
In Unity Pro, for web player and standalone build targets, all lights can optionally cast Shadows. This is done by selecting either Hard Shadows or Soft Shadows for the Shadow Type property of each individual light. For more information about shadows, please read the Shadows page.
Point Lights

A point light shines from a location equally in all directions. It is the most common light type in computer games -- typically used for explosions, light bulbs, etc. It has an average cost on the graphics processor (though point light shadows are the most expensive).

Point Light
Point lights can have cookies -- a Cubemap texture with an alpha channel. This Cubemap gets projected out in all directions.

Point Light with a Cookie
Spot Lights

A spot light shines only in one direction, within a cone. It is perfect for flashlights, car headlights or lamp posts. It is the most expensive on the graphics processor.

Spot Light
Spot lights can also have cookies -- a texture projected down the cone of the light. This is good for creating the effect of light shining through a window. The texture should be black at the edges, have the Border Mipmaps option enabled, and have its wrapping mode set to Clamp. For more info on this, see the Textures page.

Spot Light with a Cookie
Directional Lights

Directional lights are mostly used to recreate sunlight or moonlight in outdoor scenes. The light affects all surfaces of objects in your scene. It is the least expensive on the graphics processor. Shadows from directional lights (for platforms that support shadows) are explained in depth on this page.

Directional Light
When a directional light has a cookie, it is projected down the center of the light's Z axis. The size of the Cookie is controlled with the Cookie Size property. Set the Cookie texture's wrapping mode to Repeat in the Inspector.

A Directional Light projecting a cloud-like cookie texture
A cookie is a fast way to add lots of detail to large outdoor scenes. You can even slide the light slowly over the scene to give the impression of moving clouds.
Area Lights
Area lights cast light to one side of a rectangular area of a plane.

Light is cast on all objects within the light's range. The size of the rectangle is determined by the X and Y properties, and the plane's normal (i.e. the direction the light is cast in) matches the light's positive Z direction. Light is emitted from the whole surface of the rectangle, so shading and shadows on affected objects tend to be much softer than with point or directional light sources.
Since the lighting calculations are quite processor-intensive, area lights are not available at runtime and can only be used with lightmap baking.
Performance Considerations
Lights can be rendered in one of two methods: vertex lighting and pixel lighting. Vertex lighting only calculates the lighting at the vertices of the game models and interpolates the lighting over the surfaces of the models. Pixel lights are calculated at every screen pixel, and are therefore much more expensive. Some older graphics cards only support vertex lighting.
While pixel lighting is slower to render, it does allow some effects that are not possible with vertex lighting. Normal-mapping, light cookies and realtime shadows are only rendered for pixel lights. Spotlight shapes and point light highlights also look much better when rendered in pixel mode. The three light types above would look like this when rendered in vertex light mode:

Point light in Vertex lighting mode

Spot light in Vertex lighting mode

Directional light in Vertex lighting mode
Lights have a big impact on rendering speed, so a tradeoff has to be made between lighting quality and game speed. Since pixel lights are much more expensive than vertex lights, Unity only renders the brightest lights at per-pixel quality. The actual number of pixel lights for web player and standalone build targets can be set in the Quality Settings.
You can explicitly control whether a light should be rendered as a vertex or pixel light using the Render Mode property. By default, Unity classifies lights automatically based on how much an object is affected by the light.
The actual lights that are rendered as pixel lights are determined on an object-by-object basis. This means that:
- Huge objects with bright lights could use up all the pixel lights (depending on the quality settings). If the player is far away from these lights, nearby lights will be rendered as vertex lights instead. Therefore, it can be better to split huge objects up into several smaller ones.
For more information, see Optimizing Graphics Performance on the Desktop, iOS or Android page.
Creating Cookies
For more information on creating cookies, please see the tutorial on how to create a Spot light cookie.
Shadow Mapping and the Bias Property
Shadows are implemented using a technique known as shadow mapping. This is analogous to the depth mapping a camera uses to determine which surfaces are obscured by others. The scene is internally rendered by a camera at the position of the light to create a depth map that stores the distance to each surface illuminated by the light. This kind of depth map is referred to, for obvious reasons, as a shadow map. When the scene is rendered to the main view camera, each pixel position in the view is transformed into light space so that its distance can be compared to the corresponding pixel in the shadow map. If the pixel is more distant than the shadow map pixel, then it is presumably obscured from the light by another object and will receive no illumination.

Cylinder with correct shadowing
A surface directly illuminated by a light can sometimes appear to be partly in shadow. This is because pixels that should be exactly at the distance stored in the shadow map will sometimes be deemed farther away (a consequence of using a low-resolution image for the map). The result is arbitrary patterns of pixels in shadow where they should really be lit, a visual effect known as "shadow acne".

Shadow acne in the form of small dots on the cylinder
To prevent shadow acne, a bias value can be added to the distance in the shadow map to ensure that borderline pixels will definitely pass the comparison as they should. This is the value set by the Bias property associated with a light when it has shadows enabled. It is a mistake to set the bias too high, however, since parts of a shadow near the object casting it can then sometimes be falsely illuminated. This effect is known as "Peter Panning" (i.e. the disconnected shadow makes the object look as if it is flying above the ground, like Peter Pan).

Peter Panning makes the object look as if it is floating above the ground
The bias value for a light may need a bit of tweaking to make sure that neither shadow acne nor Peter Panning occurs. It is generally easier to gauge the right value by eye rather than attempt to calculate it.
Hints
- Spot lights with cookies can be extremely effective for making light coming in from windows.
- Low-intensity point lights are good for providing depth to a scene.
- For maximum performance, use the VertexLit shader. This shader only does per-vertex lighting, giving a much higher throughput on low-end cards.
- Auto lights can cast dynamic shadows over lightmapped objects without adding extra illumination. For this to work, the Auto light must be active when the lightmap is baked. Otherwise it is rendered as a realtime light.
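Light properties such as Intensity can also be animated from script. The following hedged sketch (the flicker range is an arbitrary illustrative value) assumes it is attached to a GameObject that has a Light component, producing a simple burning-match style flicker:

```javascript
// Randomly vary the light's intensity each frame to fake a flickering flame.
var baseIntensity : float = 1.0;

function Update () {
	light.intensity = baseIntensity + Random.Range (-0.2, 0.2);
}
```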
class-LightProbeGroup
A Light Probe Group adds one or more light probes to a scene.

New probes can be created by clicking the Add Probe button in the Inspector. Once created, probes can be selected and moved just like GameObjects, and can be deleted by pressing Ctrl/Cmd + Backspace.

Light probes rendered as yellow spheres in the Scene View
class-OcclusionArea
To apply occlusion culling to moving objects, you have to create an Occlusion Area and then modify its size to fit the space where the moving objects will be located (obviously, moving objects cannot be marked as Static). You can create an Occlusion Area by adding the Occlusion Area component to an empty GameObject (select from the menu).
After creating the Occlusion Area, check the Is Target Volume checkbox to occlude moving objects.

Occlusion Area properties for moving objects
| Size | Defines the size of the Occlusion Area. |
| Center | Sets the center of the Occlusion Area. By default this is (0,0,0) and is located in the center of the box. |
| Is View Volume | Defines where the camera can be. Check this when you want to occlude Static objects that are inside this Occlusion Area. |
| Is Target Volume | Check this when you want to occlude moving objects. |
| Target Resolution | Determines how accurate the occlusion culling inside the area will be. This affects the size of the cells in the Occlusion Area. Note: this only affects Target Areas. |
| Low | Less accurate, but takes less time to process. |
| Medium | Gives a balance between occlusion culling accuracy and processing time. |
| High | More accurate occlusion culling, at the cost of longer processing time. |
| Very High | Use this when the High setting is not accurate enough, but keep in mind that it takes even longer to process. |
| Extremely High | Use this only if you need almost exactly accurate occlusion culling for moving objects. Note: it takes a lot of processing time. |
After you have created the Occlusion Area, check how the box divides into cells. To see how the Occlusion Area will be calculated, select Edit and toggle the View in the Occlusion Culling Preview Panel.

Testing the generated occlusion
After your occlusion is set up, you can test it by enabling Occlusion Culling in the Occlusion Culling Preview Panel and moving the Main Camera around in the Scene View.

The Occlusion Culling mode in the Scene View
As you move the Main Camera around (whether or not you are in Play mode), you'll see various objects being disabled. You should make sure there are no errors in the occlusion data; you will recognize an error if objects pop into view as you move around. If this happens, your options for fixing the error are to change the resolution (if you are working with Target Volumes) or to move the objects so the error goes away. To debug occlusion problems, you can move the Main Camera to the problematic position and spot-check the result there.
When the processing is done, you should see some colorful cubes in the View Area. Blue cubes represent the cell divisions for Target Volumes. White cubes represent cell divisions for View Volumes. If the parameters were set correctly, you should see some objects not being rendered. This will be because they are either outside of the view frustum of the camera or are being occluded by other objects.
If, after the occlusion is completed, nothing in the scene is being occluded, try breaking your objects into smaller pieces so they can fit inside the cells.
Page last updated: 2012-11-26
class-OcclusionPortal
In order to create occlusion primitives that can be opened and closed at runtime, Unity uses Occlusion Portals.

| Open | Indicates if the portal is open (can be set from scripts). |
| Center | Sets the center of the occlusion area. By default this is (0,0,0) and is located in the center of the box. |
| Size | Defines the size of the occlusion area. |
class-Skybox
Skyboxes are a wrapper around your entire scene that display the vast beyond of your world.

One of the default Skyboxes found under 
Properties
| Materials | The Material used to render the Skybox, which contains 6 Textures. This Material should use the Skybox shader, and each of the textures should be assigned to the proper global direction. |
Details
Skyboxes are rendered before anything else in the scene in order to give the impression of complex scenery at the horizon. A skybox is a box of 6 textures, one for each primary direction (+/-X, +/-Y, +/-Z).
You have two options for implementing Skyboxes. You can add them to an individual Camera (usually the main Camera), or you can set up a default Skybox in the Render Settings' Skybox Material property. The Render Settings is most useful if you want all Cameras in your scene to share the same Skybox.
Adding the Skybox Component to a Camera is useful if you want to override the default Skybox set up in the Render Settings. For example, you might have a split-screen game using two Cameras, and want the second Camera to use a different Skybox. To add a Skybox Component to a Camera, click to highlight the Camera and go to .
Unity's Standard Assets contain 2 pre-setup Skybox materials in .
If you want to create a new Skybox, use this guide.
Hints
- If you have a Skybox assigned to a Camera, make sure to set the Camera's Clear mode to Skybox.
- It's a good idea to match your Fog color to the color of the Skybox. Fog color can be set in the Render Settings.
class-LODGroup
As your scenes get larger, performance becomes a bigger consideration. One of the ways to manage this is to have meshes with different levels of detail depending on how far the camera is from the object. This is called Level of Detail (abbreviated as LOD).
Here is one of the ways to set up an object with different LODs:
- Create an empty Game Object in the scene.
- Create 2 versions of the mesh: a high-resolution mesh (for LOD 0, when the camera is closest) and a low-resolution mesh (for LOD 1, when the camera is farther away).
- Add a LODGroup component to the empty object (choose from the menu).
- Drag the high-resolution mesh onto the first Renderers box, for LOD: 0. Choose yes in the "Reparent game objects?" dialog.
- Drag the low-resolution mesh onto the Renderers box for LOD: 1. Choose yes in the "Reparent game objects?" dialog.
- Right-click on LOD: 2 and delete it.
At this point the empty object contains both versions of the mesh and "knows" which mesh to show depending on how far away the camera is.
You can preview the effect by dragging the camera icon left and right in the LODGroup component window.
camera at LOD 0
camera at LOD 1
In the Scene View, you should be able to see:
- The percentage of the view this object occupies
- Which LOD is currently being displayed
- The number of triangles
LOD-based naming conventions in the asset import pipeline
In order to simplify the setup of LODs, Unity has a naming convention for models that are being imported.
Simply create your meshes in your modelling tool with names ending in _LOD0, _LOD1, _LOD2, etc., and the LOD group with the appropriate settings will be created for you automatically.
Note that the naming convention assumes that LOD 0 is the highest-resolution model.
Setting up LODs for different platforms
You can fine-tune your LOD settings for each platform in the Quality Settings, in particular the and properties.
Utilities
Here are some options that are useful when working with LODs:
| Recalculate Bounds | Click this to update the bounding volume when a new object has been added to the LODGroup but is not yet accounted for in the bounds. One example where this is needed is when one of the objects is part of a prefab and a new object is added to that prefab; objects added directly to the LODGroup have their bounds updated automatically. |
| Update Lightmaps | Updates the Scale in Lightmap property of the lightmaps based on the LOD boundaries. |
| Upload to Importer | Uploads the LOD boundaries to the importer. |
class-Texture3D
Unity supports the use and creation of 3D textures from shaders and scripts. While you may not find a use for them at first, they become part of the workflow when implementing effects such as 3D Color Correction.
Currently, 3D textures can only be created from script. The following short script creates a "neutral" 3D texture: when used as a lookup texture for 3D Color Correction, the corrected image is identical to the input.
// Fills the given 3D texture with an identity lookup table:
// each voxel's color equals its normalized (r, g, b) coordinate,
// so using it for color correction leaves the image unchanged.
function CreateIdentityLut (dim : int, tex3D : Texture3D)
{
	var newC : Color[] = new Color[dim * dim * dim];
	var oneOverDim : float = 1.0f / (1.0f * dim - 1.0f);
	for (var i : int = 0; i < dim; i++) {
		for (var j : int = 0; j < dim; j++) {
			for (var k : int = 0; k < dim; k++) {
				// Voxels are indexed in x + y*dim + z*dim*dim order
				newC[i + (j*dim) + (k*dim*dim)] = new Color ((i*1.0f)*oneOverDim, (j*1.0f)*oneOverDim, (k*1.0f)*oneOverDim, 1.0f);
			}
		}
	}
	tex3D.SetPixels (newC);
	tex3D.Apply ();
}
Page last updated: 2012-11-20
comp-TransformationGroup
This group contains all Components that handle an object's position outside of physics.
Page last updated: 2012-11-13
class-Transform
The Transform Component determines the actual Position, Rotation, and Scale of all objects in the scene. Every object has a Transform.

The Transform Component is viewable and editable in the Inspector
Properties
| Position | Position of the Transform in X, Y, and Z coordinates. |
| Rotation | Rotation of the Transform around the X, Y, and Z axes, measured in degrees. |
| Scale | Scale of the Transform along the X, Y, and Z axes. A value of 1 is the original size (the size at which the object was imported). |
All properties of a Transform are measured relative to the Transform's parent (see below for further details). If the Transform has no parent, the properties are measured in world space.
Using Transforms
Transforms are manipulated in 3D space along the X, Y, and Z axes. In Unity, these axes are represented by the colors red, green, and blue respectively. Remember: XYZ = RGB.

Color-coding relationship between the 3 axes and the Transform properties
Transform Components can be directly manipulated in the Scene View or by editing their properties in the Inspector. In the scene, you can modify Transforms using the Move, Rotate, and Scale tools. These tools are located in the upper left-hand corner of the Unity Editor.

The View, Translate, Rotate, and Scale tools
The tools can be used on any object in the scene. When you click on an object, you will see the tool gizmo appear within it. The appearance of the gizmo depends on which tool is currently selected.

All three gizmos can be directly edited in the Scene View.
To manipulate the Transform, click and drag on one of the three gizmo axes; you'll notice its color changes. As you drag the mouse, the object translates, rotates, or scales along the axis. When you release the mouse button, the axis remains selected. You can then click the middle mouse button and drag the mouse to manipulate the Transform along the selected axis.

Any individual axis will become selected when you click on it
Parenting
Parenting is one of the most important concepts to understand when using Unity. When a GameObject is a Parent of another GameObject, the Child GameObject will move, rotate, and scale exactly as its Parent does. Just like your arms are attached to your body: when you turn your body, your arms move with it because they're attached. Any object can have multiple children, but only one parent.
You can create a Parent by dragging any GameObject in the Hierarchy View onto another. This creates a Parent-Child relationship between the two GameObjects.

An example of a Parent-Child hierarchy. All GameObjects with an arrow to the left are parents.
In the above example, the body is the parent of the arms, and the arms are parents of the hands. Any scene you make in Unity will contain collections of these Transform hierarchies. The topmost parent object is called the Root object. When you move, scale, or rotate a parent, all the changes in its Transform are applied to its children as well.
It is worth pointing out that the Transform values shown in the Inspector of any Child GameObject are displayed relative to the Parent's Transform values. These are also called Local Coordinates. Through scripting, you can access the Global Coordinates as well as the Local Coordinates.
You can build compound objects by parenting several separate objects together, like the skeletal structure of a human ragdoll. You can also achieve useful effects with simple hierarchies. For example, say you have a horror game that takes place at night, and you want to give the player a flashlight. To create this object, you would parent a spotlight Transform to the flashlight Transform. Then, any alteration of the flashlight Transform will also affect the spotlight, creating a realistic flashlight effect.
Performance issues and limitations with non-uniform scaling
Non-uniform scaling is when the Scale in a Transform has different values for x, y, and z, for example (2, 4, 2). In contrast, uniform scaling has the same value for x, y, and z, for example (3, 3, 3). Non-uniform scaling can be useful in a few select cases but should generally be avoided.
Non-uniform scaling has a negative impact on rendering performance. In order to transform vertex normals correctly, the mesh is transformed on the CPU and a copy of the data is created. Normally, meshes are shared between instances and kept in graphics memory, but in this case both the CPU and memory cost are incurred per instance.
There are also certain limitations in how Unity handles non-uniform scaling:
- Certain components do not fully support non-uniform scaling. For example, for components with a radius property or similar, such as a Sphere Collider, Capsule Collider, Light or Audio Source, the shape remains circular/spherical and will not become an ellipse even under a non-uniform scale.
- A child object that has a non-uniformly scaled parent and is rotated relative to that parent may have a non-orthogonal matrix, meaning that it may appear skewed. Even components that do support non-uniform scaling do not support non-orthogonal matrices. For example, a Box Collider cannot be skewed, so its shape will not match the shape of the rendered mesh in such a case.
- For performance reasons, a child object that has a non-uniformly scaled parent will not have its scale/matrix automatically updated while rotating. This may cause the scale to pop once it is updated, for example if the object is detached from its parent.
Importance of Scale
The scale of the Transform determines the difference between the size of your mesh in your modeling application and the size of your mesh in Unity. The mesh's size in Unity (and therefore the Transform's scale) is very important, especially during physics simulation. There are 3 factors that determine the scale of your object:
- The size of your mesh in your 3D modeling application.
- The Mesh Scale Factor setting in the object's Import Settings.
- The Scale values of your Transform Component.
Ideally, you should not adjust the Scale of your object in the Transform Component. The best option is to create your models at real-life scale so you won't have to change your Transform's scale. The next best option is to adjust the scale at which your mesh is imported in the Import Settings for your individual mesh. Certain optimizations occur based on the import size, and instantiating an object that has an adjusted scale value can decrease performance. For more information, see the section about optimizing scale on the Rigidbody component page.
Hints
- When parenting Transforms, set the parent's location to <0,0,0> before adding the child. This will save you many headaches later.
- Particle Systems are not affected by the Transform's Scale. In order to scale a Particle System, you need to modify the properties in the System's Particle Emitter, Animator, and Renderer.
- If you are using Rigidbodies for physics simulation, be sure to read about the Scale property on the Rigidbody component page.
- You can change the colors of the Transform axes (and other UI elements) from .
- If you can avoid scaling in Unity, do so. Try to have the scale of your object finalized in your 3D modeling application, or in the Import Settings of your mesh.
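The parenting and local/global coordinate behavior described above can also be driven from script. A minimal sketch (the variable name newParent is an arbitrary illustration and must be assigned in the Inspector):

```javascript
// Re-parent this object at runtime, then compare its local and world coordinates.
var newParent : Transform;

function Start () {
	transform.parent = newParent;                               // this object now follows newParent
	Debug.Log ("Local position: " + transform.localPosition);   // relative to the parent
	Debug.Log ("World position: " + transform.position);        // global coordinates
}
```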
comp-UnityGUIGroup
UnityGUI is the GUI creation system built into Unity. It consists of creating different Controls, and defining the content and appearance of those Controls.
The Components of UnityGUI allow you to define the appearance of Controls.
For more information about creating Controls and defining their content using UnityGUI, please read the GUI Scripting Guide.
Page last updated: 2012-11-13
class-GUISkin
GUISkins are a collection of GUIStyles that can be applied to your GUI. Each Control type has its own Style definition. Skins are intended to allow you to apply style to an entire UI, instead of a single Control by itself.

A GUI Skin as seen in the Inspector
To create a GUISkin, select from the menu bar.
GUISkins are part of the UnityGUI system. For more detailed information about UnityGUI, please take a look at the GUI Scripting Guide.
Properties
All of the properties within a GUI Skin are individual GUIStyles. Please read the GUIStyle page for more information about how to use Styles.
| Font | The global Font to use for every Control in the GUI. |
| Box | The Style to use for all Boxes. |
| Button | The Style to use for all Buttons. |
| Toggle | The Style to use for all Toggles. |
| Label | The Style to use for all Labels. |
| Text Field | The Style to use for all Text Fields. |
| Text Area | The Style to use for all Text Areas. |
| Window | The Style to use for all Windows. |
| Horizontal Slider | The Style to use for all Horizontal Slider bars. |
| Horizontal Slider Thumb | The Style to use for all Horizontal Slider Thumb Buttons. |
| Vertical Slider | The Style to use for all Vertical Slider bars. |
| Vertical Slider Thumb | The Style to use for all Vertical Slider Thumb Buttons. |
| Horizontal Scrollbar | The Style to use for all Horizontal Scrollbars. |
| Horizontal Scrollbar Thumb | The Style to use for all Horizontal Scrollbar Thumb Buttons. |
| Horizontal Scrollbar Left Button | The Style to use for all Horizontal Scrollbar Left Buttons. |
| Horizontal Scrollbar Right Button | The Style to use for all Horizontal Scrollbar Right Buttons. |
| Vertical Scrollbar | The Style to use for all Vertical Scrollbars. |
| Vertical Scrollbar Thumb | The Style to use for all Vertical Scrollbar Thumb Buttons. |
| Vertical Scrollbar Up Button | The Style to use for all Vertical Scrollbar Up Buttons. |
| Vertical Scrollbar Down Button | The Style to use for all Vertical Scrollbar Down Buttons. |
| Custom 1-20 | Additional custom Styles that can be applied to any Control. |
| Custom Styles | An array of additional custom Styles that can be applied to any Control. |
| Settings | Additional Settings for the entire GUI. |
| Double Click Selects Word | If enabled, double-clicking a word will select it. |
| Triple Click Selects Line | If enabled, triple-clicking a line will select it. |
| Cursor Color | Color of the keyboard cursor. |
| Cursor Flash Speed | The speed at which the text cursor will flash when editing any Text Control. |
| Selection Color | Color of the selected area of text. |
Details
When you are creating an entire GUI for your game, you will likely need to do a lot of customization for every different Control type. In many different game genres, like real-time strategy or role-playing, there is a need for practically every single Control type.
Because each individual Control uses a particular Style, it does not make sense to create a dozen-plus individual Styles and assign them all manually. GUI Skins take care of this problem for you. By creating a GUI Skin, you have a pre-defined collection of Styles for every individual Control. You then apply the Skin with a single line of code, which eliminates the need to manually specify the Style of each individual Control.
Creating GUISkins
GUISkins are asset files. To create a GUI Skin, select from the menu bar. This will put a new GUISkin in your Project View.

A new GUISkin file in the Project View
Editing GUISkins
After you have created a GUISkin, you can edit all of the Styles it contains in the Inspector. For example, the Text Field Style will be applied to all Text Field Controls.

Editing the Text Field Style in a GUISkin
No matter how many Text Fields you create in your script, they will all use this Style. Of course, you still have control over changing the style of one particular Text Field compared to the others if you wish. We'll discuss how that is done next.
Applying GUISkins
To apply a GUISkin to your GUI, you must use a simple script to read and apply the Skin to your Controls.
// Create a public variable where we can assign the GUISkin
var customSkin : GUISkin;

// Apply the Skin in our OnGUI() function
function OnGUI () {
	GUI.skin = customSkin;

	// Now create any Controls you like, and they will be displayed with the custom Skin
	GUILayout.Button ("I am a re-skinned button");

	// You can change or remove the Skin for some Controls but not others
	GUI.skin = null;

	// Any Controls created here will use the default Skin and not the custom Skin
	GUILayout.Button ("This button uses the default UnityGUI Skin");
}
In some instances you will want to have two of the same Control with different Styles. For this, it does not make sense to create a new Skin and re-assign it. Instead, you use one of the Custom Styles in the Skin. Provide a Name for the Custom Style, and you can use that name as the last argument of the individual Control.
// One of the custom Styles in this Skin has the name "MyCustomControl"
var customSkin : GUISkin;

function OnGUI () {
	GUI.skin = customSkin;

	// We provide the name of the Style we want to use as the last argument of the Control function
	GUILayout.Button ("I am a re-skinned button", "MyCustomControl");

	// We can also ignore the Custom Style, and use the Skin's default Button Style
	GUILayout.Button ("I am a re-skinned button");
}
For more information about working with GUIStyles, please read the GUIStyle page. For more information about using UnityGUI, please read the GUI Scripting Guide.
Page last updated: 2012-11-13
class-GUIStyle
GUI Styles are a collection of custom attributes for use with UnityGUI. A single GUI Style defines the appearance of a single UnityGUI Control.

A GUI Style in the Inspector
If you want to add style to more than one Control, use a GUI Skin instead of a GUI Style. For more information about UnityGUI, please read the GUI Scripting Guide.
Properties
| Name | The text string that can be used to refer to this specific Style. |
| Normal | Background image and text color of the Control in its default state. |
| Hover | Background image and text color when the mouse is positioned over the Control. |
| Active | Background image and text color when the mouse is actively clicking the Control. |
| Focused | Background image and text color when the Control has keyboard focus. |
| On Normal | Background image and text color of the Control in its enabled state. |
| On Hover | Background image and text color when the mouse is positioned over the enabled Control. |
| On Active | Properties when the mouse is actively clicking the enabled Control. |
| On Focused | Background image and text color when the enabled Control has keyboard focus. |
| Border | Number of pixels on each side of the Background image that are not affected by the scale of the Control's shape. |
| Padding | Space in pixels from each edge of the Control to the start of its contents. |
| Margin | The margins between elements rendered in this style and any other GUI Controls. |
| Overflow | Extra space to be added to the background image. |
| Font | The Font used for all text in this style. |
| Image Position | The way the background image and text are combined. |
| Alignment | Standard text alignment options. |
| Word Wrap | If enabled, text that reaches the boundaries of the Control will wrap around to the next line. |
| Text Clipping | If Word Wrap is enabled, choose how to handle text that exceeds the boundaries of the Control. |
| Overflow | Any text that exceeds the Control boundaries will continue beyond the boundaries. |
| Clip | Any text that exceeds the Control boundaries will be hidden. |
| Content Offset | Number of pixels along the X and Y axes that the content will be displaced, in addition to all other properties. |
| X | Left/Right Offset. |
| Y | Up/Down Offset. |
| Fixed Width | Number of pixels for the width of the Control, which will override any provided Rect() value. |
| Fixed Height | Number of pixels for the height of the Control, which will override any provided Rect() value. |
| Stretch Width | If enabled, Controls using this style can be stretched horizontally for a better layout. |
| Stretch Height | If enabled, Controls using this style can be stretched vertically for a better layout. |
Details
GUIStyles are declared from scripts and modified on a per-instance basis. If you want to use a single or few Controls with a custom Style, you can declare this custom Style in the script and provide the Style as an argument of the Control function. This will make those Controls appear with the Style that you define.
First, you must declare a GUI Style from within a script:
/* Declare a GUI Style */
var customGuiStyle : GUIStyle;
...
When you attach this script to a GameObject, you will see the custom Style available to modify in the Inspector.

A Style declared in a script can be modified in each instance of the script
Now, when you want a specific Control to use this Style, you provide the name of the Style as the last argument in the Control function:
...
function OnGUI () {
	// Provide the Style as the last argument to use it on a Control
	GUILayout.Button ("I am a custom-styled Button", customGuiStyle);

	// If you do not want to apply the Style, do not provide it
	GUILayout.Button ("I am a normal UnityGUI Button without custom style");
}

Two Buttons, one with Style, as created by the code sample
For more information about using UnityGUI, please read the GUI Scripting Guide.
Page last updated: 2012-11-13
comp-Wizards
Page last updated: 2012-11-22
wizard-RagdollWizard
Unity has a simple wizard that lets you quickly create your own Ragdoll. You simply have to drag the different limbs onto the respective properties in the wizard. Then press the Create button, and Unity will automatically generate all the Colliders, Rigidbodies, and Joints that make up the Ragdoll for you.
Creating the Character
Ragdolls make use of Skinned Meshes, that is, character meshes rigged up with bones in a 3D modeling application. For this reason, you must build ragdoll characters in a package like Maya or Cinema4D. When you've created and rigged your character, save the asset normally in your Project Folder. When you switch to Unity, you'll see the character asset file. Select that file and the Import Settings dialog will appear inside the Inspector. Make sure that Mesh Colliders is not enabled.
Using the Wizard
It's not possible to create a ragdoll from the source asset directly, since that would require modifying the source asset file itself. Instead, you create an instance of the character asset, turn it into a ragdoll, and then save it as a Prefab for reuse.
To create an instance of the character, drag it from the Project View to the Hierarchy View. Expand its Transform Hierarchy by clicking the small arrow to the left of the instance's name in the Hierarchy. Now you are ready to start assigning your ragdoll parts.
Open the Ragdoll Wizard by choosing from the menu bar. You will now see the wizard itself.

The Ragdoll Wizard
Assigning parts to the wizard should be self-explanatory: drag the different Transforms of your character instance onto the appropriate properties in the wizard. This should be especially easy if you created the character yourself.
When you are done, click . Now when you enter Play Mode, you will see your character go limp as a ragdoll.
The final step is to save the set-up ragdoll as a Prefab. Choose from the menu bar. You will see a new Prefab appear in the Project View. Rename it to "Ragdoll Prefab". Drag the ragdoll character instance from the Hierarchy onto the "Ragdoll Prefab". You now have a fully set-up, reusable ragdoll character to use as much as you like in your game.
Page last updated: 2012-11-22
script-Terrain
This section explains how to use the Terrain Engine. It covers creation, technical details, and other considerations. It is divided into the following sections:
Using Terrains
This section covers the basics of Terrains: how to create them, and how to use the new Terrain tools and brushes.
Height
This section explains how to use the different tools and brushes to modify the Height of your Terrain.
Terrain Textures
This section explains how to add, paint, and blend Terrain Textures using different brushes.
Trees
This section contains important information for creating your own tree assets. It also covers adding and painting trees on your Terrain.
Grass
This section explains how Grass works, and how to use it.
Detail Meshes
This section explains practical usage of detail meshes such as rocks, hay, and other vegetation.
Lightmaps
You can lightmap Terrains just like any other object using Unity's built-in lightmapper. See the Lightmapping Quickstart if you need help.
Other Settings
This section covers all the other settings associated with Terrains.
Mobile performance considerations
Rendering Terrains is quite expensive, so it is not practical on lower-end mobile devices.
Page last updated: 2012-11-26
terrain-UsingTerrains
Creating a new Terrain
A new Terrain can be created by selecting from the menu. This will add a Terrain to your Project and Hierarchy Views.

Your Terrain will look like this in the Scene View:

A new Terrain in the Scene View
If you would like a differently sized Terrain, choose from the menu. There are a number of settings related to the Terrain's size that you can change from this dialog:
Setting the resolution of the Terrain
As shown above, there are several values that can be changed in this dialog.
They are:
- Terrain Width: The width of the Terrain in units.
- Terrain Height: The height of the Terrain in units.
- Terrain Length: The length of the Terrain in units.
- HeightMap Resolution: The HeightMap resolution for the selected Terrain.
- Detail Resolution: The resolution of the map that controls grass and detail meshes. For performance reasons (to save on draw calls), the lower you set this value, the better.
- Control Texture Resolution: The resolution of the splat map that layers the different textures painted onto the Terrain.
- Base Texture Resolution: The resolution of the composite texture that replaces the splat map beyond a certain distance.
Navigating the Terrain
Terrains work a bit differently than other GameObjects. You use Brushes to paint and fine-tune the Terrain. If you want to reposition a Terrain, you can modify its Transform Position values in the Inspector. This allows you to move the Terrain around, but you cannot rotate or scale it.
While the Terrain is selected in the Hierarchy, you can navigate across it with the F key. When you press F, wherever your mouse is positioned will be moved to the center of the Scene View. This allows you to touch up one area and quickly zoom over to a different area to change something else. If your mouse is not hovering over the Terrain when you press the F key, the entire Terrain will be centered in the Scene View.
Editing the Terrain
With the Terrain selected, you can look at the Inspector to see the great new Terrain editing tools.

Terrain editing tools in the Inspector
Each rectangular button is a different Terrain tool. There are tools to change the height, paint splat maps, or attach details like trees or rocks. To use a specific tool, click on it. A short description of the tool will then appear below the tool buttons.
Most of the tools make use of a brush. Several different brushes are displayed for every tool that uses one. To select a brush, just click on it. When you hover your mouse over the Terrain, the currently selected brush shows a preview at the specified size.
You will use all of these tools in the Scene View to paint directly onto your Terrain. Simply choose the tool and brush you want, then click and drag on the Terrain to edit it in real time. To paint height, textures, or decorations, the Terrain must be selected in the Hierarchy View.
Note: When you have a brush selected, move your mouse over the Terrain in the Scene View and press . This will center the Scene View over your mouse pointer position and automatically zoom in to the Brush Size distance. This is the fastest and easiest way to navigate around your Terrain while creating it.
Terrain Keyboard Shortcuts
While the Terrain Inspector is active, you can use the following keyboard shortcuts for faster editing (all of them can be customized in the Unity Preferences):
- Shift+Q through Shift+Y select the active Terrain tool.
- Comma (,) and period (.) cycle through the active brushes.
- Shift+comma (<) and Shift+period (>) cycle through the active tree/texture/detail object.
terrain-Height
Using any of the Terrain editing tools is very simple. You will literally paint the Terrain from within the Scene View. For the Height tools and all others, you just have to select the tool, then click and drag on the Terrain in the Scene View to edit it in real time.
Raising and Lowering Height
The first tool on the left is the Raise Height tool.
With this tool, you paint brush strokes that raise the height of the Terrain. Clicking the mouse once increments the height. Keeping the mouse button depressed and moving the mouse continuously raises the height until the maximum height is reached.

You can use any of the brushes to achieve different results.

If you want to lower the height when you click, hold down the key.

Note: When you have a brush selected, move your mouse over the Terrain in the Scene View and press . This will center the Scene View over your mouse pointer position and automatically zoom in to the Brush Size distance. This is the fastest and easiest way to navigate around your Terrain while creating it.
Painting Height
The second tool from the left is the Paint Height tool.
This tool allows you to specify a target height and move any part of the Terrain toward that height. Once the Terrain reaches the target height, it stops moving and rests at that height.
To specify the target height, hold the key and click on the Terrain at the height you want. You can also manually adjust the Height slider in the Inspector.

Now that you have specified the target height, clicking on the Terrain moves it up or down until it reaches that height.

Smoothing Height
The third tool from the left is the Smoothing Height tool.
This tool softens any height differences within the area you're painting. Like the other brushes, paint over the areas in the Scene View that you want to smooth.

Working with Heightmaps
You can import a grayscale heightmap created in Photoshop, or from real-world geography data, and apply it to your Terrain. To do this, choose from the menu and select the RAW file you want to open. You will then see some import settings. These are set up automatically, but you have the option of changing the size of your Terrain in this dialog if you like. When you're ready, click the button. Once the heightmap has been applied to the Terrain, you can edit it normally with all of the tools described above. Note that the Unity heightmap importer can only import grayscale files; you cannot create a raw heightmap using RGB channels, you must use grayscale.
Unity works with RAW files that use full 16-bit resolution. Any other heightmap editing application like Bryce, Terragen, or Photoshop can work with a Unity heightmap at full resolution.
You also have the option of exporting your heightmap to a RAW file. Choose from the menu and you will see the Export Settings dialog. Make any changes you like, then click to save your new heightmap.
Unity also provides an easy way to flatten your Terrain. Choose from the menu. This will flatten your Terrain to the height you specify in the wizard.
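The Flatten operation can also be reproduced from script through the TerrainData API. A sketch under the assumption that the script is attached to a Terrain object (targetHeight, a normalized 0..1 value, is an illustrative parameter):

```javascript
// Fill the entire heightmap with one normalized height, flattening the Terrain.
var targetHeight : float = 0.0;

function Start () {
	var data : TerrainData = GetComponent (Terrain).terrainData;
	var res : int = data.heightmapResolution;
	var heights : float[,] = new float[res, res];
	for (var y = 0; y < res; y++)
		for (var x = 0; x < res; x++)
			heights[y, x] = targetHeight;
	data.SetHeights (0, 0, heights);
}
```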
Page last updated: 2012-11-26
terrain-Textures
Terrain Textures can be tiled over the entire Terrain to fill the map. You can blend and combine Terrain Textures to make smooth transitions from one map to another, or to keep the surroundings varied.
Terrain Textures are also known as splat maps. What this means is that you can define several repeating high-resolution textures and blend between them arbitrarily by painting the alpha maps directly onto the Terrain. Because the textures are not large compared to the size of the Terrain, the distribution size of the Textures is very small.
Note: Using an amount of textures that is a multiple of four provides the greatest benefit for performance and storage of the Terrain alpha maps.
To begin working with textures, click the Paint Textures button in the Inspector.
Adding a Terrain Texture
Before you can paint Terrain Textures, you must add at least one Terrain Texture from your project folder to the Terrain. Choose Options Button -> Add Texture....

This will bring up the Add Terrain Texture dialog.

The Add Terrain Texture dialog
From here, select a tileable texture in the Splat property. You can either drag a texture onto the property from the Project View, or choose one from the drop-down list.
Now, set the Tile Size X and Tile Size Y properties. The larger the values, the larger each "tile" of the texture will be scaled. Textures with large Tile Sizes will be repeated fewer times over the entire Terrain. Smaller values will repeat the texture more often, with smaller tiles.
Click the button, and your first Terrain Texture will tile across the entire Terrain.

Repeat this process for as many Terrain Textures as you like.
Painting Terrain Textures
Once you've added at least two Terrain Textures, you can blend them together in various ways. This part of the process is where it gets really interesting, so let's dive right in.
Select the Terrain Texture you want to use. The currently selected Terrain Texture is highlighted in blue.

Select the brush you want to use. The currently selected brush is highlighted in blue.

Select the Size, Opacity, and Target Strength of the brush.
Size is the size of the brush relative to the grid squares of the Terrain.
Opacity is the transparency of the texture, or the amount of texture applied for a given length of time you spend painting.
Target Strength is the maximum opacity you can reach by painting continuously.

Click and drag on the Terrain to draw the Terrain Texture.

Use a variety of textures, brushes, sizes, and opacities to create a great variety of blended styles.

Note: When you have a brush selected, move your mouse over the Terrain in the Scene View and press . This will center the Scene View over your mouse pointer position and automatically zoom in to the Brush Size distance. This is the fastest and easiest way to navigate around your Terrain while creating it.
Page last updated: 2012-11-23
terrain-Trees
Unity's Terrain Engine has special support for Trees. You can put thousands of trees onto a Terrain and render them in-game at a practical framerate. This works by rendering trees near the camera in full 3D, and transitioning far-away trees to 2D billboards. Billboards in the distance are updated to orient themselves correctly as they are viewed from different angles. This transition system makes a detailed tree environment very simple in terms of performance. You have complete control over tweaking the parameters of the mesh-to-billboard transition so you can get the best performance you need.

You can easily paint lots of trees for beautiful environments like this.
Adding Trees
Select the Place Trees button
in the Inspector.
Before you can place trees on your Terrain, you have to add them to the library of available trees. To do this, click . The Add Tree dialog will appear.

The Add Tree dialog
Select a tree from your Project View and drag it onto the Tree variable. You can also edit the Bend Factor if you want to add an animated "bending in the wind" effect to the trees. When you're ready, click . The tree will now appear selected in the Inspector.

The newly added tree appears selected in the Inspector.
You can add as many trees as you wish. Each one will be selectable in the Inspector for you to place on your Terrain.

The currently selected tree is highlighted in blue.
Painting Trees
While the Place Trees tool is selected, you can place trees by clicking on the Terrain. To erase trees, hold the button and click on the Terrain.

Painting trees is as easy as using a paintbrush.
There are a number of options available when placing trees:
| Brush Size | Radius in meters of the tree-placing brush. |
| Tree Spacing | Percentage of tree width between trees. |
| Color Variation | Allowed amount of color difference between each tree. |
| Tree Height | Height adjustment of each tree compared to the asset. |
| Height Variation | Allowed amount of difference in height between each tree. |
| Tree Width | Width adjustment of each tree compared to the asset. |
| Width Variation | Allowed amount of difference in width between each tree. |
Treeのペイント(木の描画)のヒント

異なるブラシは、異なるエリアサイズをカバーします

Tree Spacingを調整し、Treeを描画する時の、密度を調整します。
Editing Trees
To change the import parameters of an added tree, select the detail and click the button, or double-click the tree you want to edit. The Edit Tree dialog will appear, where you can change its settings.
Mass Placement
If you don't want to paint each tree individually and would rather generate a whole forest, use . The Mass Place Trees dialog will appear. Set the number of trees you want placed, and they will be positioned instantly. All the trees added to the Terrain will be used for mass placement.

10,000 trees placed at once
Refreshing Source Assets
If you make updates to a tree asset's source file, it must be manually re-imported into the Terrain. Use to do this. Doing so after changing and saving the source asset refreshes the trees on the Terrain immediately.
Creating Trees
Trees can be created for the Terrain engine in two ways:
The first is to build a tree with Unity's built-in Tree Creator; the second is to use a third-party modeling program supported by Unity. In the latter case, every tree should consist of a single mesh with two Materials: one for the trunk and one for the leaves.
For performance reasons, the triangle count of an average tree should be kept below 2000; the fewer triangles, the better. The pivot point of the tree mesh must be at the root of the tree, that is, at the point where the tree should meet the ground. This makes it easy to import trees into Unity from other modeling applications.
Trees must use the Nature/Soft Occlusion Leaves and Nature/Soft Occlusion Bark shaders. To use these shaders, the tree must be placed in a special folder whose name contains "Ambient-Occlusion". When you place a model in such a folder and reimport it, Unity will calculate Soft Ambient Occlusion specialized for trees; the "Nature/Soft Occlusion" shaders need this information. If you don't follow the naming convention, the tree will look weird, with completely black areas.
Unity also ships with several high-quality trees in the "Terrain Demo.unitypackage". You can use these in your game.
Using Low-Poly Trees

One branch with leaves is made of only six triangles and still looks nicely bent. You can add more triangles for even more curved surfaces, but the important point is to use triangles, not quads, when building trees. If you use quads, you need twice as many triangles to express the same curvature of the branches.
Trees with large polygons that are mostly invisible due to alpha waste fill rate and should be avoided for performance reasons; of course, the goal is still to create dense-looking trees. This is also why the trees in Oblivion look so good: they are so dense you cannot see through the leaves.
Setting up Tree Collisions
Using colliders with trees is very easy. Once you have imported the tree asset file, all you need to do is instantiate it in the Scene View, add a Capsule Collider, and turn the GameObject into a new Prefab. Then, when you add trees to the Terrain, you add the tree Prefab with the Capsule Collider attached. Only Capsule Colliders can be used for tree collisions.
Making Trees Collide

The Terrain Collider in the Inspector
If you want Rigidbodies to collide with the trees, make sure the Create Tree Colliders option is enabled; otherwise objects will pass straight through them. Note that the PhysX engine used by Unity supports a maximum of 65536 colliders in a single scene. If you use more trees than that (minus the other colliders already used in the scene), then the tree colliders will fail with an error.
Page last updated: 2012-11-26
terrain-Grass
The Paint Foliage button
lets you paint grass, rocks, or other foliage around your terrain. To paint grass, select the option to add a grass texture. Grass does not need a mesh to be created; it is just a texture.

The Add Grass Texture dialog
This dialog lets you fine-tune the appearance of the grass with the following options:
| Detail Texture | The texture to be used for the grass. |
| Min Width | Minimum width of each grass section, in meters. |
| Max Width | Maximum width of each grass section, in meters. |
| Min Height | Minimum height of each grass section, in meters. |
| Max Height | Maximum height of each grass section, in meters. |
| Noise Spread | The size of the noise-generated clusters of grass. Lower values mean less noise. |
| Healthy Color | Color of healthy grass, prominent in the center of Noise Spread clusters. |
| Dry Color | Color of dry grass, prominent on the outer edges of Noise Spread clusters. |
| Grayscale Lighting | If enabled, the grass texture will not be tinted by any colored light shining on the Terrain. |
| Lightmap Factor | How much the grass is influenced by the lightmap. |
| Billboard | If enabled, the grass will always rotate to face the main Camera. |
After clicking the button, the grass appears selectable in the Inspector.

The newly added grass appears selected in the Inspector
Painting Grass
Painting grass works the same as painting textures or trees. Select the grass you want to paint, and paint right onto the Terrain in the Scene View.

Painting grass is very easy
Note: once you have a brush selected, hover the mouse over the Terrain in the Scene View and press the key. This centers the Scene View on the mouse pointer position and automatically zooms in to the Brush Size distance. This is the fastest and easiest way to navigate around the Terrain while you are creating it.
Editing Grass
To change the import parameters for a grass texture, select it and choose . You can also double-click the grass texture. The Edit Grass dialog will appear, where you can adjust the parameters described above.
Changing just a few parameters can make a world of difference. Even changing the Max/Min Width and Height parameters will vastly change the way the grass looks, even though the number of grass objects painted on the Terrain stays the same.

Grass created with the default parameters

The same number of painted grass objects, now with increased width and height
terrain-DetailMeshes
Any Terrain decoration that is not a tree or grass should be created as a Detail Mesh. This is perfect for things like rocks, 3D shrubbery, and other static items. To add them, use the Paint Foliage button.
Then choose . The Add Detail Mesh dialog will appear.

The Add Detail Mesh dialog
| Detail | The mesh to be used for the detail. |
| Noise Spread | The size of the noise-generated clusters of the detail. Lower values mean less noise. |
| Random Width | Limit for the random width variation between all detail objects. |
| Random Height | Limit for the random height variation between all detail objects. |
| Healthy Color | Color of healthy detail objects, prominent in the center of Noise Spread clusters. |
| Dry Color | Color of dry detail objects, prominent on the outer edges of Noise Spread clusters. |
| Grayscale Lighting | If enabled, detail objects will not be tinted by any colored light shining on the Terrain. |
| Lightmap Factor | How much the detail objects are influenced by the lightmap. |
| Render Mode | Select whether this type of detail object will be lit using Grass lighting or normal vertex lighting. Detail objects like rocks should use vertex lighting. |
After you click the button, the detail mesh appears in the Inspector, next to the grass.

The added detail mesh appears in the Inspector next to the Grass objects
Painting Detail Meshes
Painting detail meshes works the same as painting textures, trees, or grass. Select the detail you want to paint, and paint right onto the Terrain in the Scene View.

Painting detail meshes is very simple.
Note: with a brush selected, move the mouse over the Terrain in the Scene View and press the key. This centers the Scene View on the mouse pointer position and automatically zooms in to the Brush Size distance. This is the fastest and easiest way to navigate the Terrain while editing it.
Editing Details
To change the import parameters of a detail mesh, select it and click . The Edit Detail Mesh dialog will appear, where you can change the parameters described above.
Refreshing Source Assets
If you make updates to a detail mesh asset's source file, it must be manually re-imported into the Terrain. Use to do this. Do it after changing and saving the source asset, and the detail meshes on the Terrain will be refreshed immediately.
Hints:
- The UVs of detail mesh objects must be in the 0-1 range, because all the textures used by detail meshes are packed into a single texture atlas.
Terrain Lightmapping
This section explains how to use the Terrain Engine. It covers creation, technical details, and other considerations. It is divided into the following sections:
Using Terrains
This section covers the basic information about Terrains, including how to create them and how to use the new Terrain tools and brushes.
Height
This section explains how to use the different tools and brushes to modify the Height of the Terrain.
Terrain Textures
This section explains how to add, paint, and blend Terrain textures using different brushes.
Trees
This section explains the important information needed when creating tree assets. It also covers adding and painting trees on the Terrain.
Grass
This section explains how grass works and how to use it.
Detail Meshes
This section explains practical usage for detail meshes such as rocks, haystacks, and vegetation.
Lightmaps
You can lightmap Terrains just like any other object, using Unity's built-in lightmapper. See the Lightmapping Quickstart for help.
Other Settings
This section covers all the other settings related to Terrains.
Mobile performance note
Rendering terrain is quite expensive, so terrain rendering may not be practical on low-end mobile devices.
Page last updated: 2012-11-26
terrain-OtherSettings
In the Terrain Inspector, below the buttons,
there are a number of additional options.

All additional Terrain settings
Base Terrain
- Pixel Error: Controls the amount of error allowed when displaying the Terrain geometry. This is essentially a LOD setting for the geometry; higher values give a less dense terrain mesh.
- Base Map Dist.: The distance up to which Terrain textures are displayed at high resolution. Beyond this distance, a lower-resolution texture is used.
- Cast Shadows: Specifies whether the terrain casts shadows.
You can assign a custom material for the Terrain in the Material slot. This material should use a shader capable of rendering terrain, for example Nature/Terrain/Diffuse (the shader used when no material is assigned) or Nature/Terrain/Bumped Specular.
Tree & Detail Settings
- Draw: If enabled, trees, grass, and detail meshes are drawn.
- Detail Distance: The distance from the camera beyond which detail objects stop being displayed.
- Tree Distance: The distance from the camera beyond which trees stop being displayed. The higher this is, the further away trees can be seen.
- Billboard Start: The distance from the camera at which trees start being displayed as billboards instead of meshes.
- Fade Length: The total distance over which trees transition between billboard orientation and mesh orientation.
- Max Mesh Trees: The maximum number of trees allowed to be rendered as meshes on the Terrain.
Wind Settings
- Speed: The speed at which wind blows through the grass.
- Size: The area of grass affected by wind at any one time.
- Bending: How much the grass is bent by the wind.
- Grass Tint: The overall color tint applied to grass and detail meshes.
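These settings are also exposed on the Terrain component's scripting API, so they can be adjusted at runtime, for example to reduce rendering cost on weaker hardware. A sketch, assuming the property names below (heightmapPixelError, basemapDistance, treeDistance, detailObjectDistance) as exposed by Unity's Terrain class; the values are arbitrary examples:

```csharp
using UnityEngine;

// Lowers terrain rendering cost at runtime by pulling in draw distances.
public class TerrainQuality : MonoBehaviour {
    void Start () {
        Terrain t = Terrain.activeTerrain;
        t.heightmapPixelError = 10f;   // coarser geometry LOD (Pixel Error)
        t.basemapDistance = 500f;      // switch to the low-res base map sooner
        t.treeDistance = 1000f;        // stop drawing distant trees earlier
        t.detailObjectDistance = 60f;  // cull grass and detail meshes earlier
    }
}
```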
class-Tree
The Tree Creator lets you build and edit trees step by step. The resulting trees can be used as regular GameObjects or integrated into the Terrain engine. With this tool you can quickly create great-looking trees, and then fine-tune them for a perfect fit in your game using a range of design tools. The Tree Creator is very useful when you need different types of trees, for example for forests or jungles.
Building Your First Tree
This section walks you step by step through creating your first tree.
Tree Creator Structure
This section provides an overview of the Tree Creator's user interface.
Branches
This section focuses on explaining the properties specific to branches.
Leaves
This section focuses on explaining the properties specific to leaves.
Wind Zones
This section explains Wind Zones and how to apply them to your trees.
Trees in the Terrain Engine
This section of the Terrain Engine guide covers the basics of integrating trees into the Terrain engine.
Page last updated: 2012-11-25
tree-FirstTree
This walks you through using Unity's Tree Creator. First, make sure that the Tree Creator package is included in your project. If it isn't, select from the menu, navigate to Unity's installation folder, and open the Standard Packages folder. Select the Tree Creator.unityPackage package to add the needed assets to your project.
Adding a new Tree
To create a new Tree asset, select from the menu.

A new Tree asset is created in the Project View, and instantiated in the currently open scene. The new tree is very basic, with only a single branch, so let's give it some character.
Adding Branches
A brand new tree in your scene
Select the tree you want to work on to view the Tree Creator in the Inspector. The tree's interface provides all the tools you need for shaping and transforming your tree. You will see two nodes in the Tree Hierarchy: the Tree root node and a single Branch Group node.
In the Tree Hierarchy, select the Branch Group, which acts as the trunk of the tree. Click the Add Branch Group button and you will see a new Branch Group appear connected to the main branch. Now you can play with the properties of the Branch Group to see the branch variations attached to the trunk.

Adding branches to the trunk.
After creating the branches connected to the trunk, we can create smaller twigs attached to them by adding another Branch Group node. Select the second Branch Group and click the Add Branch Group button again. Tweak the values of this group's properties to create more branches attached to the secondary branches.

Adding branches to the secondary branches.
This puts the branch structure of the tree in place. Unless your game is set in the winter, the next step is to add some leaves.
Adding Leaves
We decorate our tree with leaves by adding Leaf Groups. These work in much the same way as the Branch Groups we've seen so far. Select your secondary Branch Group node and click the Add Leaf Group button. If you want to go further, you can also add Leaf Groups to the tiny twig branches at the top of the tree.

Leaves attached to the secondary branches and the twigs
At this point the leaves are rendered as opaque planes. This is because the leaf properties (size, position, rotation, etc.) should be adjusted before a Material is applied. Keep tweaking the leaf values until you're happy with the settings.
Adding Materials
To make our tree realistic looking, we need to apply Materials for the branches and the leaves. Create a new Material by selecting from the menu. Rename it My Tree Bark and choose from the shader dropdown. From here you can assign the textures provided in the Tree Creator package to the base map, normal map, and gloss properties of the bark Material. We recommend using "BigTree_bark_diffuse" for the base and gloss properties and "BigTree_bark_normal" for the normal map property.
Follow the same steps to create a Material for the leaves. Create a new Material and assign the shader as . Assign the leaf texture from the Tree Creator package to the texture slot.

Materials for the leaves
Once both Materials are created, we can assign them to the different group nodes of the tree. Select your tree, click on any branch or leaf node, and expand the Geometry section of the Branch Group properties. You will see a Material assignment slot for the type of node you have selected. Assign the Material you created and check the results.

Setting the leaves Material
To finish off the tree, assign your Materials to all the Branch and Leaf Group nodes. Now you're ready to put your first tree into a game!

Leaves and branches with Materials applied
Hints
- Creating trees is a process of trial and error.
- Don't create too many leaves/branches, as this can affect your game's performance.
- Check the alpha maps guide for creating custom leaves.
tree-Structure
The Tree Creator's Inspector is split into three different panes: Hierarchy, Editing Tools, and Properties.
Hierarchy
The hierarchy view is where the tree-building process starts. It shows a schematic representation of the tree, where each box represents a group of nodes. By selecting one of the groups in the hierarchy, you can modify its properties. You can also add or remove groups by clicking one of the buttons in the toolbar below the hierarchy.

This hierarchy represents a tree with one trunk and 25 child branches. The child branches have a total of 70 fronds attached, and the branches hold 280 leaves plus 25 fronds. The node representing the last group is selected. The trunk also has 25 leaves of one type and 15 of another; the last group is hidden.
| Tree Stats | Status information about the tree: the number of vertices, triangles, and materials the tree has. |
| Delete Node | Deletes the currently selected group in the Hierarchy, or the currently selected node or spline point in the Scene View. |
| Copy Node | Copies the currently selected group. |
| Add Branch | Adds a branch group node to the currently selected group node. |
| Add Leaf | Adds a leaf group node to the currently selected group node. |
| External Reload | Recomputes the whole tree. Use it when the source materials have changed, or when the mesh used for leaves in Mesh Geometry Mode has changed. |
Nodes in the tree hierarchy represent groups of elements in the tree itself, i.e. branches, leaves, or fronds. There are five types of nodes:
Root Node:

This is the starting point of a tree. It determines global parameters for the tree, such as quality, seed to diversify the trees, ambient occlusion, and material properties.
Branch Nodes
The first branch group node attached to the root node creates the trunk. The following branch nodes create child branches. Various shape, growth, and breaking parameters can be set on this type of node.
Leaf Nodes
Leaves can be attached to the root node (e.g. for small bushes) or to branch nodes. Leaves are final nodes; no other nodes can be attached to them. Various geometry and distribution parameters can be set on this type of node.
Frond Nodes
These behave in a similar way to branch nodes, but have some of the shape properties disabled and add frond-specific properties.
Branch + Frond Nodes
This type of node is a combination of a branch and a frond, with access to both types of properties.
Node Parameters
- The numbers in the top right of each node represent the number of elements this node created in the tree. The value is related to the frequency parameter in the Distribution tab.
- A node can be made visible (
) or invisible (
).
- A warning is shown on a node if it has been edited manually (e.g. the branch splines or leaves have been manipulated in the Scene View) (
). In this case some procedural properties are disabled.
Editing Tools

While the Tree Creator works with procedural elements, you can decide at any time to edit them by hand, to achieve the exact placement and shape you want.
Once you have edited a group manually, certain procedural properties will no longer be available. You can, however, always revert it to a procedural group by clicking the button displayed in .
| Move | Select a node or a spline point in the Scene View. Dragging a node moves it along and around its parent. Spline points can be moved using the normal move handles. |
| Rotate | Select a node or a spline point in the Scene View. Both will show the normal rotation handles. |
| Free Hand Drawing | Click on a spline point and drag the mouse to draw a new shape. Drawing ends when you release the mouse button. The drawing is always done on a plane perpendicular to the viewing direction. |
Global Tree Properties
Every tree has a root node which contains the global properties. This is the least complex group type, but it contains some important properties that control the rendering and generation of the entire tree.
Distribution
When the frequency on the branch node connected to the root node is higher than 1, you can diversify trees by changing the Tree Seed, and create groups of trees by adjusting the Area Spread.

| Tree Seed | The global seed that affects the entire tree. Use it to randomize your tree while keeping its general structure. |
| Area Spread | Adjusts the spread of trunk nodes. Only has an effect if there are multiple trunks. |
| Ground Offset | Adjusts the offset of trunk nodes on the Y axis. |
Geometry
Allows you to set the overall quality of the tree geometry and to control ambient occlusion.

| LOD Quality | Defines the level-of-detail for the entire tree. A low value will make the tree less complex; a high value will make the tree more complex. Check the statistics in the hierarchy view to see the current complexity of the mesh. Depending on the type of tree and your target platform, you may need to adjust this property to keep the tree within your polygon budget. With careful creation of art assets, you can produce good-looking trees with relatively few polygons. |
| Ambient Occlusion | Toggles ambient occlusion on or off. While modifying tree properties, ambient occlusion is always hidden and won't be recomputed until you finish your changes, e.g. release a slider. Ambient occlusion can greatly improve the visual quality of your tree, but its computation takes time, so you may want to disable it until you are happy with the shape of your tree. |
| Occlusion Intensity | Adjusts the intensity of ambient occlusion. Higher values give a darker effect. |
Material
Controls the global material properties of the tree.

Translucency is one of the effects you can control in the Material properties. That property has an effect on leaves, which are translucent, meaning they permit light to pass through them, but they diffuse it on the way.
| Translucency Color | The color that will be multiplied in when the leaves are backlit. |
| Translucency View Dependency | Fully view-dependent translucency is relative to the angle between the view direction and the light direction. View-independent translucency is relative to the angle between the leaf's normal vector and the light direction. |
| Alpha Cutoff | Alpha values from the base texture smaller than the alpha cutoff are clipped, creating a cutout. |
| Shadow Strength | Makes the shadows on the leaves less harsh. Note that it scales all the shadowing that the leaves receive, so it should be used with care for trees that are e.g. in the shadow of a mountain. |
| Shadow Offset | Scales the values from the Shadow Offset texture set in the source material. It is used to offset the position of the leaves when collecting the shadows, so that the leaves appear as if they were not placed on one quad. It's especially important for billboarded leaves; the texture should have brighter values in the center and darker ones at the borders. Start out with a black texture and add different shades of gray per leaf. |
| Shadow Caster Resolution | Defines the resolution of the texture atlas containing the alpha values from the source diffuse textures. The atlas is used when the leaves are rendered as shadow casters. Using a lower resolution may result in faster rendering. |
Branches
This section focuses on explaining the specific Branch Group Properties.
Leaves
This section focuses on explaining the specific Leaf Group Properties.
Page last updated: 2012-11-13
tree-Branches
The Branch Group node is responsible for generating branches and fronds. Its properties appear when you have selected a branch, frond, or branch + frond node.
Distribution
Adjusts the count and placement of branches in the group. Use the curves to fine-tune position, rotation, and scale. The curves are relative to the parent branch, or to the area spread in the case of trunks.

| Group Seed | The seed for this group of branches. Modify to vary procedural generation. |
| Frequency | Adjusts the number of branches created for each parent branch. |
| Distribution | The way the branches are distributed along their parent. |
| Twirl | Twirls the branches around their parent. |
| Whorled Step | Defines how many nodes are in each whorled step when using Whorled distribution. For real plants this is normally a Fibonacci number. |
| Growth Scale | Defines the scale of nodes along the parent node. Use the curve to adjust, and the slider to fade the effect in and out. |
| Growth Angle | Defines the initial angle of growth relative to the parent. Use the curve to adjust, and the slider to fade the effect in and out. |
Geometry
Select what type of geometry is generated for this Branch Group and which materials are applied. Using the LOD Multiplier, you can adjust the quality of this group relative to the tree's LOD Quality.

| LOD Multiplier | Adjusts the quality of this group relative to the tree's LOD Quality, so that it has a higher or lower quality than the rest of the tree. |
| Geometry Mode | The type of geometry for this Branch Group: Branch Only, Branch + Fronds, or Fronds Only. |
| Branch Material | The primary material for the branches. |
| Break Material | The material for capping broken branches. |
| Frond Material | The material for the fronds. |
Shape
Adjusts the shape and growth of the branches. Use the curves to fine-tune the shape; all curves are relative to the branch itself.

| Length | Adjusts the length of the branches. |
| Relative Length | Determines whether the radius of a branch is affected by its length. |
| Radius | Adjusts the radius of the branches. Use the curve to fine-tune the radius along the length of the branches. |
| Cap Smoothing | Defines the roundness of the cap/tip of the branches. Useful for cacti. |
| Growth | Adjusts the growth of the branches. |
|---|---|
| Crinkliness | Adjusts how crinkly/crooked the branches are. Use the curve to fine-tune. |
| Seek Sun | Use the curve to adjust how the branches bend upwards/downwards, and the slider to change the scale. |
| Surface Noise | Adjusts the surface noise of the branches. |
| Noise | The overall noise factor. Use the curve to fine-tune. |
| Noise Scale U | The scale of the noise around the branch. Lower values give a more wobbly look, while higher values give a more stochastic look. |
| Noise Scale V | The scale of the noise along the branch. Lower values give a more wobbly look, while higher values give a more stochastic look. |
| Flare | Defines the flare for the trunk. |
| Flare Radius | The radius of the flares; this is added to the trunk's radius. A value of zero means no flares. |
| Flare Height | Defines how far up the trunk the flares start. |
| Flare Noise | Defines the noise of the flares. Lower values give a more wobbly look, while higher values give a more stochastic look. |

These properties are used on child branches only, not on trunks.
| Welding | Defines the welding of branches onto their parent branch. Only valid for secondary branches. |
|---|---|
| Weld Length | Defines how far up the branch the spread begins. |
| Spread Top | The weld spread factor on the top side of the branch, relative to its parent branch. Zero means no spread. |
| Spread Bottom | The weld spread factor on the bottom side of the branch, relative to its parent branch. Zero means no spread. |
Breaking
Controls the breaking of branches.

| Break Chance | The chance of a branch breaking, i.e. 0 = no branches are broken, 0.5 = half of all branches are broken, 1.0 = all branches are broken. |
| Break Location | This range defines where the branches will be broken, relative to the branch length. |
Fronds
Here you can adjust the number of fronds and their properties. This tab is only visible if you have Fronds enabled in the Geometry tab.

| Frond Count | Defines the number of fronds per branch. Fronds are always evenly distributed around the branch. |
| Frond Width | The width of the fronds. Use the curve to adjust the shape along the length of the branch. |
| Frond Range | Defines the starting and ending point of the fronds. |
| Frond Rotation | Defines the rotation around the parent branch. |
| Frond Crease | Adjusts the crease of the fronds. |
Animation
Adjusts the parameters used for animating this group of branches. The wind zones are only active in Play Mode.

| Main Wind | The primary wind effect. This creates a gentle swaying motion and is usually the only parameter needed for primary branches. |
| Main Turbulence | A secondary turbulence effect. Produces more of an individual, stochastic motion per branch. Typically used for trees with fronds, such as ferns and palms. |
| Edge Turbulence | Turbulence along the edges of the fronds. Useful for ferns, palms, etc. |
| Create Wind Zone | Creates a Wind Zone. |
tree-Leaves
Leaf Groups generate leaf geometry, either from primitives or from user-created meshes.
Distribution
Adjusts the count and placement of leaves in the group. Use the curves to fine-tune position, rotation, and scale. The curves are relative to the parent branch.

| Group Seed | The seed for this group of leaves. Modify to vary procedural generation. |
| Frequency | Adjusts the number of leaves created for each parent branch. |
| Distribution | Selects the way the leaves are distributed along their parent. |
| Twirl | Twirls the leaves around their parent. |
| Whorled Step | Defines how many nodes are in each whorled step when using Whorled distribution. For real plants this is normally a Fibonacci number. |
| Growth Scale | Defines the scale of nodes along the parent node. Use the curve to adjust, and the slider to fade the effect in and out. |
| Growth Angle | Defines the initial angle of growth relative to the parent. Use the curve to adjust, and the slider to fade the effect in and out. |
Geometry
Select what type of geometry is generated for this Leaf Group and which materials are applied. If you use a custom mesh, its materials will be used.

| Geometry Mode | The type of geometry for this Leaf Group. You can use a custom mesh by selecting the Mesh option; ideal for flowers, fruits, etc. |
| Material | The material used for the leaves. |
| Mesh | The mesh used for the leaves. |
Shape
Adjusts the shape and growth of the leaves.

| Size | Adjusts the size of the leaves. Use the range to adjust the minimum and maximum size. |
| Perpendicular Align | Adjusts whether the leaves are aligned perpendicular to the parent branch. |
| Horizontal Align | Adjusts whether the leaves are aligned horizontally. |
Animation
Adjusts the parameters used for animating this group of leaves. The wind zones are only active in Play Mode. If the Main Wind and Main Turbulence values are too high, the leaves may appear to float away from the branch.

| Main Wind | The primary wind effect. This should usually be kept low to avoid leaves floating away from their parent branch. |
| Main Turbulence | A secondary turbulence effect. For leaves this should usually be kept low. |
| Edge Turbulence | Defines how much wind turbulence occurs along the edges of the leaves. |
class-WindZone
A Wind Zone adds realism to your trees by making the branches and leaves sway as if blown by the wind.

A Spherical Wind Zone on the left, a Directional Wind Zone on the right.
Properties
| Mode | |
| Spherical | The Wind Zone only has an effect inside its radius, with a falloff from the center towards the edge. |
| Directional | The Wind Zone affects the entire scene in one direction. |
| Radius | The radius of the Spherical Wind Zone (only active if the mode is set to Spherical). |
| Wind Main | The primary wind force. Produces a slowly varying wind pressure. |
| Turbulence | The turbulent wind force. Produces a rapidly varying wind pressure. |
| Pulse Magnitude | Defines how much the wind changes over time. |
| Pulse Frequency | Defines the frequency of the wind changes. |
Details
Wind Zones are used only by the Tree Creator to animate leaves and branches. This can help your trees look more natural and allows forces in your game (such as explosions) to appear as if they are interacting with the trees. For more information about how a tree works, see the tree class page.
Using Wind Zones in Unity
Using Wind Zones in Unity is very easy.
First, to create a new Wind Zone, click .
Place the Wind Zone (depending on its type) near the Tree Creator trees to see how it interacts with them.
Note: If the Wind Zone is Spherical, the trees you want to blow must be placed within the sphere's radius. If the Wind Zone is directional, it doesn't matter where in the scene you place it.
Hints
- To produce a general, slowly varying wind:
- Create a directional Wind Zone.
- Set Wind Main to 1.0 or less, depending on how powerful the wind should be.
- Set Turbulence to 0.1.
- Set Pulse Magnitude to 1.0 or more.
- Set Pulse Frequency to 0.25.
- To create the effect of a helicopter:
- Create a spherical Wind Zone.
- Set Radius to something that fits the size of your helicopter.
- Set Wind Main to 3.0.
- Set Turbulence to 5.0.
- Set Pulse Magnitude to 0.1.
- Set Pulse Frequency to 1.0.
- Attach the Wind Zone to a GameObject resembling your helicopter.
- To create the effect of an explosion:
- Do the same as with the helicopter, but fade in Wind Main and Turbulence quickly, then fade them out so the effect wears off.
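The helicopter recipe above can also be set up from a script. This is a hedged sketch, assuming the WindZone component exposes its inspector values to scripts (mode, radius, windMain, windTurbulence, windPulseMagnitude, windPulseFrequency, as in recent Unity versions); the GameObject name and lifetime are illustrative:

```csharp
using UnityEngine;

// Spawns a short-lived spherical wind zone to fake a gust (e.g. an explosion)
// hitting nearby Tree Creator trees.
public class ExplosionWind : MonoBehaviour {
    public void Explode (Vector3 position) {
        GameObject go = new GameObject("Explosion Wind"); // hypothetical name
        go.transform.position = position;
        WindZone wind = go.AddComponent<WindZone>();
        wind.mode = WindZoneMode.Spherical;
        wind.radius = 20f;
        wind.windMain = 3f;
        wind.windTurbulence = 5f;
        wind.windPulseMagnitude = 0.1f;
        wind.windPulseFrequency = 1f;
        Destroy(go, 2f); // remove the zone so the effect dies off
    }
}
```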
AnimationEditorGuide
Unity's Animation View allows you to create and modify Animation Clips directly inside Unity. It is designed to be a powerful and intuitive alternative to external 3D animation programs. In addition to animating movement, the editor lets you animate variables of materials and components, and augment your Animation Clips with Animation Events, functions that are called at specified points along the timeline.
See the pages on Animation import and Animation Scripting for further information about those subjects.
The Animation View Guide is broken up into several pages, each focusing on a different area of the view:
Using the Animation View
This section covers the basic operations of the Animation View, such as creating and editing Animation Clips.
Using Animation Curves
This section explains how to create Animation Curves, add and move keyframes, and set WrapModes. It also offers tips for using Animation Curves to their full potential.
Editing Curves
This section explains how to navigate efficiently in the editor, create and move keys, and edit tangents and tangent types.
Objects with Multiple Moving Parts
This section explains how to animate Game Objects with multiple moving parts, and how to handle cases where more than one Animation Component can control the selected Game Object.
Using Animation Events
This section explains how to add Animation Events to an Animation Clip. Animation Events let you call a script function at specified points in the animation timeline.
Page last updated: 2012-11-25
animeditor-UsingAnimationEditor
The Animation View can be used to preview and edit Animation Clips for animated Game Objects in Unity. The Animation View can be opened from the menu.

Viewing Animations on a GameObject
The Animation View is tightly integrated with the Hierarchy View, the Scene View, and the Inspector. Like the Inspector, the Animation View shows whichever Game Object is selected. You can select a Game Object using the Hierarchy View or the Scene View. (If you select a Prefab in the Project View, you can inspect its animation curves as well, but you have to drag the Prefab into the Scene View in order to be able to edit the curves.)

The Animation View shows the Game Object selected in the Hierarchy View.
At the left side of the Animation View is a hierarchical list of the animatable properties of the selected Game Object. The list is ordered like the Inspector, showing the Components and Materials attached to the Game Object. You can fold and unfold a Component or Material by clicking the small triangle next to it. If the selected Game Object has any child Game Objects, they are shown after all of the Components and Materials.

The property list at the left side of the Animation View shows the Components and Materials of the selected Game Object, just like the Inspector.
Creating a New Animation Clip
Animated Game Objects in Unity need an Animation Component that controls the animations. If a Game Object doesn't already have an Animation Component, the Animation View can add one for you automatically when you create a new Animation Clip or when you enter Animation Mode.
To create a new Animation Clip for the selected Game Object, click the right of the two selection boxes at the upper left of the Animation View and select . You will then be prompted to save the Animation Clip somewhere in your Assets folder. If the Game Object doesn't have an Animation Component already, it will be automatically added at this point. The new Animation Clip is automatically added to the list of Animations in the Animation Component.

Creating a new Animation Clip.
In the Animation View you can always see which Game Object you are animating and which Animation Clip you are editing. There are two selection boxes in the upper left of the Animation View. The left selection box shows the Game Object with the Animation Component attached, and the right selection box shows the Animation Clip being edited.

The left selection box shows the Game Object with the Animation Component attached, and the right selection box shows the Animation Clip being edited.
Animating a Game Object
To begin editing an Animation Clip for the selected Game Object, click the button.
This enters Animation Mode, where changes to the Game Object are stored into the Animation Clip. (If the Game Object doesn't have an Animation Component already, it will be automatically added at this point. If there is no existing Animation Clip, you will be prompted to save one somewhere in your Assets folder.)
You can stop Animation Mode at any time by clicking the button again. This reverts the Game Object to the state it was in before entering Animation Mode.
You can animate any of the properties shown in the property list of the Animation View. To animate a property, click the button for that property and select from the menu. You can also select multiple properties and right-click the selection to add curves to all the selected properties at once. (The Transform properties are special in that the .x, .y, and .z properties are linked, so curves for all three are added at the same time.)

Any property can be animated by clicking the button or by right-clicking its name.
For Transform properties, the curves for .x, .y, and .z are added at the same time.
In Animation Mode, a red vertical line shows which frame of the Animation Clip is currently being previewed. The Inspector and Scene View show the Game Object at that frame of the Animation Clip. The values of the animated properties at that frame are also shown in a column to the right of the property names.

In Animation Mode, a red vertical line shows the currently previewed frame. The animated values at that frame are previewed in the Inspector, the Scene View, and to the right of the property names in the Animation View.
You can click anywhere on the Time Line to preview or modify that frame in the Animation Clip. The numbers in the Time Line are shown as seconds and frames, so 1:30 means 1 second and 30 frames.
Click in the Time Line to preview a specific frame.
You can go directly to a specific frame by typing it in, or use the buttons to go to the previous or next keyframe. You can also use the following keyboard shortcuts to navigate between frames:
- Press Comma (,) to go to the previous frame.
- Press Period (.) to go to the next frame.
- Hold Alt and press Comma (,) to go to the previous keyframe.
- Hold Alt and press Period (.) to go to the next keyframe.
In Animation Mode you can move, rotate, or scale the Game Object in the Scene View. This will automatically create Animation Curves for the position, rotation, and scale properties of the Animation Clip if they don't already exist, and keys on those Animation Curves will be automatically created at the currently previewed frame to store the respective values you changed.
You can also use the Inspector to modify any of the animatable properties of the Game Object. This too will create Animation Curves as needed, and create keys on those Animation Curves at the currently previewed frame to store your changed values. Properties that are not animatable are grayed out in the Inspector while in Animation Mode.
The button creates a keyframe on the shown curves at the currently previewed frame (shortcut: the key). Use it to create a keyframe manually; this creates a key on all the curves currently shown in the Animation View. If you select only some of the properties in the property list, only their curves are shown, which is useful for selectively adding keys to specific properties only.

When a property is selected in the property list, only the curve for that property is shown.
Playback
The Animation Clip can be played back at any time by clicking the button in the Animation View.
Click the Play button to play back the Animation Clip. The playback loops within the time range shown in the Time Line. This makes it possible to focus on refining a small part of the Animation Clip you are working on, without having to play back the entire length of the clip. To play back the whole length of the Animation Clip, zoom out to view the entire time range, or press the key with no keys selected. See Editing Animation Curves for more on navigating the Time Line.
Page last updated: 2012-11-26
animeditor-AnimationCurves
The Property List
In an Animation Clip, any animatable property can have an Animation Curve, which means that the Animation Clip controls that property. In the property list of the Animation View, properties with Animation Curves have colored curve indicators. For information on how to add curves to an animation property, see the section on Using the Animation View.
A Game Object can have quite a few components, and the property list in the Animation View can get very long. To show only the properties that have Animation Curves, click the button in the lower left of the Animation View and set it to .

Set the toggle button in the lower left to hide all the properties without Animation Curves from the property list.
Understanding Curves, Keys and Keyframes
An Animation Curve has multiple keys, which are control points that the curve passes through. These are visualized in the Curve Editor as small diamond shapes on the curves. A frame in which one or more of the shown curves have a key is called a keyframe. Keyframes are shown as white diamond shapes in the .
If a property has a key at the currently previewed frame, the curve indicator will have a diamond shape.

The property has a key at the currently previewed frame. The marks all the keyframes.
The only shows keyframes of the curves that are shown. If a property is selected in the property list, only that property is shown, and the will not mark the keys of the curves that are not shown.

When a property is selected, the other properties are not shown, and the keys of their curves are not shown in the keyframe line.
Adding and Moving Keyframes

The shows the keyframes of the currently shown curves. You can add a keyframe by double-clicking the or by using the button.
A keyframe can be added at the currently previewed frame by clicking the button, or at any given frame by double-clicking the at that frame. This adds a key to all the shown curves at once. It is also possible to add a keyframe by right-clicking the and selecting from the context menu. Once placed, keyframes can be dragged around with the mouse. It is also possible to select multiple keyframes and drag them at once. Keyframes can be deleted by selecting them and pressing , or by right-clicking them and selecting from the context menu.
Wrap Mode
An Animation Clip in Unity can have various Wrap Modes that can, for example, set the Animation Clip to loop. See WrapMode in the Scripting Reference for details. The Wrap Mode of an Animation Clip can be set in the Animation View in the lower right selection box. The Curve View previews the selected Wrap Mode as white outlines outside the time range of the Animation Clip.

Setting the Wrap Mode of an Animation Clip will preview that Wrap Mode in the .
Supported Animatable Properties
The Animation View can be used to animate much more than just the position, rotation, and scale of a Game Object. The properties of any Component or Material can be animated, even the public variables of your own script components. Making animations with complex visual effects and behaviors is only a matter of adding Animation Curves for the relevant properties.
The following types of properties are supported in the animation system:
- Float
- Color
- Vector2
- Vector3
- Vector4
- Quaternion
Arrays are not supported, nor are structs or objects other than the ones listed above.
Booleans in script components are not supported by the animation system, but booleans in certain built-in components are. For those booleans, a value of 1 equals True while any other value equals False.
Here are a few examples of the many things the Animation View can be used for:
- Animate the Color and Intensity of a Light to make it blink, flicker, or pulsate.
- Animate the Pitch and Volume of a looping Audio Source to bring life to blowing wind, running engines, or flowing water, while keeping the sizes of the sound assets to a minimum.
- Animate the Texture Offset of a Material to simulate moving belts or tracks, flowing water, or special effects.
- Animate the Emit state and Velocities of multiple Ellipsoid Particle Emitters to create spectacular fireworks or fountain displays.
- Animate variables of your own script components to make things behave differently over time.
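The last point requires nothing special on the scripting side: any public variable of a supported type is picked up by the Animation View automatically. A minimal sketch (the class, field, and use of a Light are just an illustration):

```csharp
using UnityEngine;

// The public field below shows up in the Animation View's property list
// and can be given an Animation Curve like any built-in property.
public class Flicker : MonoBehaviour {
    public float intensity = 1.0f; // animate this from the Animation View

    void Update () {
        // Apply the animated value each frame, e.g. to drive a light.
        GetComponent<Light>().intensity = intensity;
    }
}
```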
When using Animation Curves to control game logic, please be aware of the way animations are played back and sampled in Unity.
Rotation Interpolation Types
In Unity, rotations are internally represented as Quaternions. Quaternions consist of .x, .y, .z, and .w values that are generally not meant to be modified manually, except by people who know exactly what they are doing. Instead, rotations are typically manipulated using Euler Angles, where the .x, .y, and .z values represent the rotations around those three respective axes.
When interpolating between two rotations, the interpolation can be performed either on the Quaternion values or on the Euler Angles values. The Animation View lets you choose which form of interpolation to use when animating Transform rotations. However, the rotations are always shown in the form of Euler Angles values, no matter which interpolation form is used.

Transform rotations can use Euler Angles interpolation or Quaternion interpolation.
Quaternion Interpolation
Quaternion interpolation always generates smooth interpolations along the shortest path between two rotations. This avoids rotation interpolation artifacts such as Gimbal Lock. However, Quaternion interpolation cannot represent rotations larger than 180 degrees, because it is then shorter to go the other way around. If you use Quaternion interpolation and place two keys further apart than 180 degrees, the curve will look discontinuous, even though the actual rotation is still smooth; it simply goes the other way around, because that way is shorter. If rotations larger than 180 degrees are desired, additional keys must be placed in between. When using Quaternion interpolation, changing the keys or tangents of one curve may also change the shapes of the other two curves, since all three curves are created from the internal Quaternion representation. When using Quaternion interpolation, keys are always linked, so creating a key at a specific time for one of the three curves will also create a key at that time for the other two curves.

Placing two keys 270 degrees apart when using Quaternion interpolation will cause the interpolated value to go the other way around, through 90 degrees.
Euler Angles Interpolation
Euler Angles interpolation is what most people are used to working with. Euler Angles can represent arbitrarily large rotations, and the .x, .y, and .z curves are independent of each other. Euler Angles interpolation can be subject to rotation artifacts such as Gimbal Lock when rotating around multiple axes at the same time, but it is intuitive and simple to work with for rotations around one axis at a time. When Euler Angles interpolation is used, Unity internally bakes the curves into the Quaternion representation it uses internally. This is similar to what happens when importing animations into Unity from external programs. Note that this curve baking may add extra keys in the process, and that tangents with the tangent type may not be completely precise at a sub-frame level.
Page last updated: 2012-11-26
EditingCurves
Curves can be used for many different things, and there are several different controls in Unity that use curves that can be edited:
- The Animation View uses curves to animate properties over time in an Animation Clip.

The Animation View.
- Script components can have member variables of type AnimationCurve, which can be used for all kinds of things. Clicking on one of those in the Inspector will open the Curve Editor.

The Curve Editor.
- The Audio Source component uses curves to control rolloff and other properties as a function of distance to the Audio Source.

Distance function curves in the Audio Source component in the Inspector.
While these controls have subtle differences, the curves can be edited in exactly the same way in all of them. This page explains how to navigate and edit curves in those controls.
Adding and Moving Keys on a Curve
A key can be added to a curve by double-clicking the curve at the point where the key should be placed. It is also possible to add a key by right-clicking a curve and selecting from the context menu.
Once placed, keys can be dragged around with the mouse:
- Click on a key to select it. Drag the selected key with the mouse.
- To snap the key to the grid while dragging it, hold down Command (Mac) / Control (Windows) while dragging.
It is also possible to select and drag multiple keys at once:
- To select multiple keys at once, hold down Shift while clicking the keys.
- To deselect a selected key, click on it again while holding down Shift.
- To select all keys within a rectangular area, click on an empty spot and drag to form the rectangle selection.
- The rectangle selection can also be added to existing selected keys by holding down Shift.
Keys can be deleted by selecting them and pressing Delete, or by right-clicking them and selecting from the context menu.
Navigating the Curve View
When working with the Animation View, you can easily zoom in on details of the curves you want to work with, or zoom out to get the full picture.
You can always press F to frame-select the shown curves or the selected keys in their entirety.
Zooming
You can zoom the Curve View using the scroll wheel of your mouse, the zoom functionality of your trackpad, or by holding Alt while right-dragging with your mouse.
You can zoom on only the horizontal or vertical axis:
- Zoom horizontally by holding down Command (Mac) / Control (Windows) while zooming.
- Zoom vertically by holding down Shift while zooming.
Furthermore, you can drag the end caps of the scrollbars to shrink or expand the area shown in the Curve View.
Panning
You can pan the Curve View by middle-dragging with your mouse, or by holding Alt while left-dragging.
Editing Tangents
A key has two tangents: one on the left for the incoming slope and one on the right for the outgoing slope. The tangents control the shape of the curve between the keys. The Animation View features multiple tangent types that can be used to easily control the curve shape. The tangent types for a key can be chosen by right-clicking the key.

Right-click a key to select the tangent types for that key.
In order for animated values to change smoothly when passing a key, the left and right tangents must be co-linear. The following tangent types ensure smoothness:
- : The tangents are automatically set to make the curve pass smoothly through the key.
- : The tangents can be freely set by dragging the tangent handles. The left and right tangents are locked to be co-linear to ensure smoothness.
- : The tangents are set to be horizontal (this is a special case of ).
Sometimes smoothness is not desired. When the tangents are , the left and right tangents can be set individually. Each of the left and right tangents can be set to one of the following types:
- : The tangent can be freely set by dragging the tangent handle.
- : The tangent points towards the neighboring key. A curve segment can be made completely straight by setting the tangents at both ends to .
- : The curve retains a constant value between two keys. The value of the left key determines the curve segment.
animeditor-MultipleParts
You may want to animate Game Objects that have multiple moving parts, such as a gun turret with a moving barrel, or a character with many body parts. While you can animate all the parts with a single Animation Component on the parent, it can be convenient to have additional animations on the child objects themselves.
Animating Child Game Objects
The Game Object hierarchy is shown in the panel at the left side of the Animation View.
You can access the child Game Objects using the foldout triangle next to the object's name. The properties of child objects can be animated just like those of the parent.

Child Game Objects can be expanded in the Animation View.
Alternatively, you can just select the child Game Object you want to animate from the Hierarchy panel or the Scene View. This shows only that child object's properties in the property list, although the animation data is still handled by the parent's Animation Component.

Child Game Objects selected in the Hierarchy View are shown in the Animation View.
Handling Multiple Animation Components
If both a parent object and one of its children have an Animation Component, either component can animate the child object. Use the property list to select which one to use.

Select the Animation Component you want to edit from the property list.
As an example, you might have several characters (say, a hero and a sidekick) that each have their own Animation Component. In the same scene you could have another Game Object with an Animation Component that controls a cutscene. If the hero and sidekick both walk around during the cutscene, the positions of both can be controlled from the cutscene controller. However, the characters can only be controlled by that Animation Component if they are child objects of the cutscene object.
Page last updated: 2012-11-25
animeditor-AnimationEvents
Animation Events let you get the most out of your Animation Clips by calling functions in the object's scripts at specified points in the timeline.
The function called by an Animation Event can optionally take one argument. The argument can be a float, string, int, object reference, or an AnimationEvent object. The AnimationEvent object has member variables that allow a float, string, int, and object reference to be passed into the function all at once, along with other information about the event that triggered the function call.
// This JavaScript function can be called by an Animation Event
function PrintFloat (theValue : float) {
    Debug.Log ("PrintFloat is called with a value of " + theValue);
}
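For reference, the equivalent function in C# looks like this (the class name is illustrative; the method just needs to match the name entered in the event dialog):

```csharp
using UnityEngine;

public class EventReceiver : MonoBehaviour {
    // This C# method can be called by an Animation Event
    public void PrintFloat (float theValue) {
        Debug.Log("PrintFloat is called with a value of " + theValue);
    }
}
```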
To add an Animation Event to a clip at the current playback position, click the button, or double-click the at the point in the animation where you want the event to be triggered. Once added, an event can be repositioned by dragging it with the mouse. To delete an event, select it and press , or right-click it and select from the context menu.

Animation Events are shown in the . Add a new Animation Event by double-clicking the or by using the button.
When you add an event, a dialog box pops up, prompting you for the name of the function and the value of the argument you want to pass to it.

The Animation Event popup dialog lets you specify which function to call and which argument value to pass.
The events added to a clip are shown as markers in the event line. Hold the mouse over a marker to see a tooltip with the function name and argument value.

Hold the mouse over an Animation Event marker in the to see the function it calls and its argument value.
GUI Scripting Guide
Overview
UnityGUI allows you to create a wide variety of highly functional GUIs very quickly and easily. Rather than creating a GUI object, manually positioning it, and then writing a script that handles its functionality, you do everything at once with a small amount of code. A single function call instantiates, positions, and defines a GUI Control.
For example, the following code creates and handles a button with no additional work in the editor or elsewhere:
// JavaScript
function OnGUI () {
if (GUI.Button (Rect (10,10,150,100), "I am a button")) {
print ("You clicked the button!");
}
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
void OnGUI () {
if (GUI.Button (new Rect (10,10,150,100), "I am a button")) {
print ("You clicked the button!");
}
}
}

The button created by the code above
Although this example is very simple, there are very powerful and complex techniques available in UnityGUI. GUI construction is a broad subject, but the following sections should help you get up to speed as quickly as possible. This guide can be read straight through or used as reference material.
UnityGUI Basics
This section covers the most important concepts of UnityGUI, giving you an overview as well as a set of working examples you can paste into your own projects. UnityGUI is very friendly to play with, so this is a good place to get started.
Controls
This section lists all of the Controls available in UnityGUI. It provides code samples and images of the results.
Customization
Being able to change the appearance of the GUI to match the feel of your game is important. All Controls in UnityGUI can be customized with GUIStyles and GUISkins; this section explains how to use them.
Layout Modes
UnityGUI offers two ways to arrange your GUI: you can manually place each control on the screen, or you can use an automatic layout system that works in a manner similar to HTML tables. You can use either system as you prefer, and mix the two freely. This section covers the functional differences between the two systems, including examples.
Extending UnityGUI
UnityGUI is very easy to extend with new Control types. This chapter shows how to make simple "compound" controls, complete with integration into Unity's event system.
Extending Unity Editor
The GUI of the Unity editor is actually written using UnityGUI. This means the editor is fully extensible using the same code you would use for in-game GUI. In addition, there are a number of editor-specific GUI widgets that come in handy when creating custom editor GUI.
Page last updated: 2012-11-26

UnityGUI Basics
This section covers the basics of scripting Controls with UnityGUI.
Making Controls with UnityGUI
UnityGUI Controls make use of a special function called OnGUI(). The OnGUI() function gets called every frame as long as the containing script is enabled - just like the Update() function.
GUI Controls themselves are very simple in structure. This structure is evident in the following example.
/* Example level loader */
// JavaScript
function OnGUI () {
// Make a background box
GUI.Box (Rect (10,10,100,90), "Loader Menu");
// Make the first button. If it is pressed, Application.LoadLevel (1) will be executed
if (GUI.Button (Rect (20,40,80,20), "Level 1")) {
Application.LoadLevel (1);
}
// Make the second button.
if (GUI.Button (Rect (20,70,80,20), "Level 2")) {
Application.LoadLevel (2);
}
}
//C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
void OnGUI () {
// Make a background box
GUI.Box(new Rect(10,10,100,90), "Loader Menu");
// Make the first button. If it is pressed, Application.LoadLevel (1) will be executed
if(GUI.Button(new Rect(20,40,80,20), "Level 1")) {
Application.LoadLevel(1);
}
// Make the second button.
if(GUI.Button(new Rect(20,70,80,20), "Level 2")) {
Application.LoadLevel(2);
}
}
}
This example is a complete, functional level loader. If you copy/paste this script and attach it to a GameObject, you will see the following menu appear when you enter Play Mode:

The Loader Menu created by the example code
Let's take a look at the details of the example code:
The first GUI line, GUI.Box (Rect (10,10,100,90), "Loader Menu");, displays a Box Control with the header text "Loader Menu". It follows the typical GUI Control declaration scheme, which we'll explore momentarily.
The next GUI line is the Button Control declaration. Note that it is slightly different from the Box declaration; specifically, the entire Button declaration is placed inside an if statement. When the button is clicked while the game is running, this if statement returns true and any code inside the if block is executed.
Since the OnGUI() code gets called every frame, you do not need to explicitly create or destroy GUI controls. The line that declares the Control is the same line that creates it. If you need to display Controls only at specific times, you can use any kind of scripting logic to do so.
/* Flashing button example */
// JavaScript
function OnGUI () {
if (Time.time % 2 < 1) {
if (GUI.Button (Rect (10,10,200,20), "Meet the flashing button")) {
print ("You clicked me!");
}
}
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
void OnGUI () {
if (Time.time % 2 < 1) {
if (GUI.Button (new Rect (10,10,200,20), "Meet the flashing button")) {
print ("You clicked me!");
}
}
}
}
Here, GUI.Button() only gets called every other second, so the button will appear and disappear. Naturally, the user can only click it while the button is visible.
As you can see, you can use any desired logic to control when GUI Controls are displayed and functional. Now we will explore the details of each Control's declaration.
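The visibility test in the flashing-button example is ordinary arithmetic, so it can be checked outside Unity. A minimal JavaScript sketch, where a plain number stands in for Time.time:

```javascript
// The button is visible whenever the current time, modulo 2 seconds,
// falls in the first half of the cycle - one second out of every two.
function buttonVisible(time) {
  return time % 2 < 1;
}

var visibleAtHalfSecond = buttonVisible(0.5);      // first half of the cycle
var visibleAtOneAndHalf = buttonVisible(1.5);      // second half of the cycle
var visibleAfterRepeat = buttonVisible(2.2);       // the cycle repeats every 2 seconds
```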
Anatomy of a Control
There are three key pieces of information required when declaring a GUI Control:
Type (Position, Content)
Observe that this structure is a function with two arguments. We'll explore the details of this structure now.
Type
Type is the Control Type, and is declared by calling a function in Unity's GUI class or the GUILayout class, which is discussed at length in the Layout Modes section of this guide. For example, GUI.Label() will create a non-interactive label. All the different control types are explained later, in the Controls section of this guide.
Position
The Position is the first argument in any GUI Control function. The argument itself is provided with a Rect() function. Rect() defines four properties: left-most position, top-most position, total width, and total height. All of these values are provided in integers, which correspond to pixel values. All UnityGUI controls work in Screen Space, which is the resolution of the published player in pixels.
The coordinate system is top-left based. Rect(10, 20, 300, 100) defines a rectangle that starts at the coordinates 10,20 and ends at the coordinates 310,120. It is worth repeating that the second pair of values in Rect() are the total width and height, not the coordinates where the control ends. This is why the example above ends at 310,120 and not 300,100.
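The far corner can be computed directly from the four Rect values. A plain JavaScript sketch (the rectEnd helper is illustrative, not part of the Unity API):

```javascript
// A Rect is (left, top, width, height); the control's far corner is
// therefore (left + width, top + height).
function rectEnd(left, top, width, height) {
  return { right: left + width, bottom: top + height };
}

// The example from the text: Rect(10, 20, 300, 100)
var end = rectEnd(10, 20, 300, 100);
```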
You can use the Screen.width and Screen.height properties to get the total dimensions of the screen space available in the player. The following example may help clarify how this is done:
/* Screen.width & Screen.height example */
// JavaScript
function OnGUI () {
GUI.Box (Rect (0,0,100,50), "Top-left");
GUI.Box (Rect (Screen.width - 100,0,100,50), "Top-right");
GUI.Box (Rect (0,Screen.height - 50,100,50), "Bottom-left");
GUI.Box (Rect (Screen.width - 100,Screen.height - 50,100,50), "Bottom-right");
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
void OnGUI(){
GUI.Box (new Rect (0,0,100,50), "Top-left");
GUI.Box (new Rect (Screen.width - 100,0,100,50), "Top-right");
GUI.Box (new Rect (0,Screen.height - 50,100,50), "Bottom-left");
GUI.Box (new Rect (Screen.width - 100,Screen.height - 50,100,50), "Bottom-right");
}
}

The Boxes positioned by the example above
Content
The second argument for a GUI Control is the actual content to be displayed with the Control. Most often you will want to display some text or an image on your Control. To display text, pass a string as the Content argument like this:
/* String Content example */
// JavaScript
function OnGUI () {
GUI.Label (Rect (0,0,100,50), "This is the text string for a Label Control");
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
void OnGUI () {
GUI.Label (new Rect (0,0,100,50), "This is the text string for a Label Control");
}
}
To display an image, declare a public Texture2D variable, and pass the variable name as the Content argument like this:
/* Texture2D Content example */
// JavaScript
var controlTexture : Texture2D;
function OnGUI () {
GUI.Label (Rect (0,0,100,50), controlTexture);
}
// C#
public Texture2D controlTexture;
...
void OnGUI () {
GUI.Label (new Rect (0,0,100,50), controlTexture);
}
Here is an example closer to a real-world scenario:
/* Button Content examples */
// JavaScript
var icon : Texture2D;
function OnGUI () {
if (GUI.Button (Rect (10,10, 100, 50), icon)) {
print ("you clicked the icon");
}
if (GUI.Button (Rect (10,70, 100, 20), "This is text")) {
print ("you clicked the text button");
}
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
public Texture2D icon;
void OnGUI () {
if (GUI.Button (new Rect (10,10, 100, 50), icon)) {
print ("you clicked the icon");
}
if (GUI.Button (new Rect (10,70, 100, 20), "This is text")) {
print ("you clicked the text button");
}
}
}

The Buttons created by the example above
There is a third option which lets you display images and text together in a GUI Control. You can provide a GUIContent object as the Content argument, and define the string and image to be displayed within the GUIContent.
/* Using GUIContent to display an image and a string */
// JavaScript
var icon : Texture2D;
function OnGUI () {
GUI.Box (Rect (10,10,100,50), GUIContent("This is text", icon));
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
public Texture2D icon;
void OnGUI () {
GUI.Box (new Rect (10,10,100,50), new GUIContent("This is text", icon));
}
}
You can also define a Tooltip in a GUIContent, and display it anywhere in the GUI by reading the GUI.tooltip property.
/* Using GUIContent to display a tooltip */
// JavaScript
function OnGUI () {
// This line feeds "This is the tooltip" into GUI.tooltip
GUI.Button (Rect (10,10,100,20), GUIContent ("Click me", "This is the tooltip"));
// This line reads and displays the contents of GUI.tooltip
GUI.Label (Rect (10,40,100,20), GUI.tooltip);
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
void OnGUI () {
// This line feeds "This is the tooltip" into GUI.tooltip
GUI.Button (new Rect (10,10,100,20), new GUIContent ("Click me", "This is the tooltip"));
// This line reads and displays the contents of GUI.tooltip
GUI.Label (new Rect (10,40,100,20), GUI.tooltip);
}
}
Finally, if you're feeling bold, you can use GUIContent to display a string, an icon, and a tooltip!
/* Using GUIContent to display an image, a string, and a tooltip */
// JavaScript
var icon : Texture2D;
function OnGUI () {
GUI.Button (Rect (10,10,100,20), GUIContent ("Click me", icon, "This is the tooltip"));
GUI.Label (Rect (10,40,100,20), GUI.tooltip);
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
public Texture2D icon;
void OnGUI () {
GUI.Button (new Rect (10,10,100,20), new GUIContent ("Click me", icon, "This is the tooltip"));
GUI.Label (new Rect (10,40,100,20), GUI.tooltip);
}
}
The Scripting Reference page for GUIContent's constructor has an extensive list of examples.
Page last updated: 2012-11-26

Control Types
There are a number of different GUI Controls that you can create. This section lists all of the available display and interactive Controls. There are other GUI functions that affect the layout of Controls, which are described in the Layout section of this guide.
Label
The Label is non-interactive. It is for display only; it cannot be clicked or otherwise moved. It is best used for displaying information only.
/* GUI.Label example */
// JavaScript
function OnGUI () {
GUI.Label (Rect (25, 25, 100, 30), "Label");
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
void OnGUI () {
GUI.Label (new Rect (25, 25, 100, 30), "Label");
}
}

The Label created by the example code
Button
The Button is a typical interactive button. It will respond a single time when clicked, no matter how long the mouse remains pressed. The response occurs as soon as the mouse button is released.
Basic Usage
When the Button is clicked in UnityGUI, it returns true. To execute code when a Button is clicked, wrap the GUI.Button function in an if statement. Inside the if statement is the code that will be executed when the Button is clicked.
/* GUI.Button example */
// JavaScript
function OnGUI () {
if (GUI.Button (Rect (25, 25, 100, 30), "Button")) {
// This code is executed when the Button is clicked
}
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
void OnGUI () {
if (GUI.Button (new Rect (25, 25, 100, 30), "Button")) {
// This code is executed when the Button is clicked
}
}
}

The Button created by the example code
RepeatButton
The RepeatButton is a variation of the regular Button. The difference is that the RepeatButton will respond every frame that the mouse button remains pressed. This allows you to create click-and-hold functionality.
Basic Usage
When the RepeatButton is clicked in UnityGUI, it returns true for every frame that it is held down. To execute code while the RepeatButton is held, wrap the GUI.RepeatButton function in an if statement. Inside the if statement is the code that will be executed while the RepeatButton remains clicked.
/* GUI.RepeatButton example */
// JavaScript
function OnGUI () {
if (GUI.RepeatButton (Rect (25, 25, 100, 30), "RepeatButton")) {
// This code is executed every frame that the RepeatButton remains clicked
}
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
void OnGUI () {
if (GUI.RepeatButton (new Rect (25, 25, 100, 30), "RepeatButton")) {
// This code is executed every frame that the RepeatButton remains clicked
}
}
}

The RepeatButton created by the example code
TextField
The TextField Control is an interactive, editable single-line field containing a text string.
Basic Usage
The TextField will always display a string. You must provide the string to be displayed in the TextField. When edits are made to the string, the TextField function will return the edited string.
/* GUI.TextField example */
// JavaScript
var textFieldString = "text field";
function OnGUI () {
textFieldString = GUI.TextField (Rect (25, 25, 100, 30), textFieldString);
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
private string textFieldString = "text field";
void OnGUI () {
textFieldString = GUI.TextField (new Rect (25, 25, 100, 30), textFieldString);
}
}

The TextField created by the example code
TextArea
The TextArea Control is an interactive, editable multi-line area containing a text string.
Basic Usage
The TextArea will always display a string. You must provide the string to be displayed in the TextArea. When edits are made to the string, the TextArea function will return the edited string.
/* GUI.TextArea example */
// JavaScript
var textAreaString = "text area";
function OnGUI () {
textAreaString = GUI.TextArea (Rect (25, 25, 100, 30), textAreaString);
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
private string textAreaString = "text area";
void OnGUI () {
textAreaString = GUI.TextArea (new Rect (25, 25, 100, 30), textAreaString);
}
}

The TextArea created by the example code
Toggle
The Toggle Control creates a checkbox with a persistent on/off state. The user can change the state by clicking on it.
Basic Usage
The Toggle's on/off state is represented by a true/false boolean. You must provide the boolean as a parameter to make the Toggle display its actual state. Since the Toggle returns a new boolean value when it is clicked, you must assign the boolean to accept the return value of the Toggle function in order for it to be interactive.
/* GUI.Toggle example */
// JavaScript
var toggleBool = true;
function OnGUI () {
toggleBool = GUI.Toggle (Rect (25, 25, 100, 30), toggleBool, "Toggle");
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
private bool toggleBool = true;
void OnGUI () {
toggleBool = GUI.Toggle (new Rect (25, 25, 100, 30), toggleBool, "Toggle");
}
}

The Toggle created by the example code
Toolbar
The Toolbar Control is essentially a row of Buttons. Only one of the Buttons on the Toolbar can be active at a time, and it remains active until a different Button is clicked. This behavior emulates the behavior of a typical Toolbar. You can define an arbitrary number of Buttons on the Toolbar.
Basic Usage
The active Button in the Toolbar is tracked with an integer. You must provide the integer as an argument in the function. To make the Toolbar interactive, you must assign the integer to the return value of the function. The number of elements in the content array you provide determines the number of Buttons shown in the Toolbar.
/* GUI.Toolbar example */
// JavaScript
var toolbarInt = 0;
var toolbarStrings : String[] = ["Toolbar1", "Toolbar2", "Toolbar3"];
function OnGUI () {
toolbarInt = GUI.Toolbar (Rect (25, 25, 250, 30), toolbarInt, toolbarStrings);
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
private int toolbarInt = 0;
private string[] toolbarStrings = {"Toolbar1", "Toolbar2", "Toolbar3"};
void OnGUI () {
toolbarInt = GUI.Toolbar (new Rect (25, 25, 250, 30), toolbarInt, toolbarStrings);
}
}

The Toolbar created by the example code
SelectionGrid
The SelectionGrid Control is a multi-row Toolbar. You can determine the number of columns and rows in the grid. Only one Button can be active at a time.
Basic Usage
The active Button in the SelectionGrid is tracked with an integer. You must provide the integer as an argument in the function. To make the SelectionGrid interactive, you must assign the integer to the return value of the function. The number of elements in the content array you provide determines the number of Buttons shown in the SelectionGrid. You also specify the number of columns through a function argument.
/* GUI.SelectionGrid example */
// JavaScript
var selectionGridInt : int = 0;
var selectionStrings : String[] = ["Grid 1", "Grid 2", "Grid 3", "Grid 4"];
function OnGUI () {
selectionGridInt = GUI.SelectionGrid (Rect (25, 25, 100, 30), selectionGridInt, selectionStrings, 2);
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
private int selectionGridInt = 0;
private string[] selectionStrings = {"Grid 1", "Grid 2", "Grid 3", "Grid 4"};
void OnGUI () {
selectionGridInt = GUI.SelectionGrid (new Rect (25, 25, 300, 60), selectionGridInt, selectionStrings, 2);
}
}

The SelectionGrid created by the example code
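Since you supply the element count and the column count, the number of rows in the grid follows from them. A small JavaScript sketch of that relationship (the gridRows helper is illustrative, not part of the Unity API):

```javascript
// With N elements laid out in C columns, the grid needs ceil(N / C) rows.
function gridRows(elementCount, columns) {
  return Math.ceil(elementCount / columns);
}

// The example above: 4 strings in 2 columns
var rows = gridRows(4, 2);
```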
HorizontalSlider
The HorizontalSlider Control is a typical horizontal sliding knob that can be dragged to change a value between predetermined minimum and maximum values.
Basic Usage
The position of the Slider knob is stored as a float. To display the position of the knob, provide that float as one of the arguments in the function. There are two additional values that determine the minimum and maximum values. If you want the Slider knob to be adjustable, assign the slider value float to the return value of the Slider function.
/* Horizontal Slider example */
// JavaScript
var hSliderValue : float = 0.0;
function OnGUI () {
hSliderValue = GUI.HorizontalSlider (Rect (25, 25, 100, 30), hSliderValue, 0.0, 10.0);
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
private float hSliderValue = 0.0f;
void OnGUI () {
hSliderValue = GUI.HorizontalSlider (new Rect (25, 25, 100, 30), hSliderValue, 0.0f, 10.0f);
}
}

The Horizontal Slider created by the example code
VerticalSlider
The VerticalSlider Control is a typical vertical sliding knob that can be dragged to change a value between predetermined minimum and maximum values.
Basic Usage
The position of the Slider knob is stored as a float. To display the position of the knob, provide that float as one of the arguments in the function. There are two additional values that determine the minimum and maximum values. If you want the Slider knob to be adjustable, assign the slider value float to the return value of the Slider function.
/* Vertical Slider example */
// JavaScript
var vSliderValue : float = 0.0;
function OnGUI () {
vSliderValue = GUI.VerticalSlider (Rect (25, 25, 100, 30), vSliderValue, 10.0, 0.0);
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
private float vSliderValue = 0.0f;
void OnGUI () {
vSliderValue = GUI.VerticalSlider (new Rect (25, 25, 100, 30), vSliderValue, 10.0f, 0.0f);
}
}

The Vertical Slider created by the example code
HorizontalScrollbar
The HorizontalScrollbar Control is similar to a Slider Control, but visually similar to the scrolling elements of web browsers or word processors. This Control is used to navigate a ScrollView Control.
Basic Usage
Horizontal Scrollbars are implemented identically to Horizontal Sliders, with one exception: there is an additional argument which controls the width of the Scrollbar knob itself.
/* Horizontal Scrollbar example */
// JavaScript
var hScrollbarValue : float;
function OnGUI () {
hScrollbarValue = GUI.HorizontalScrollbar (Rect (25, 25, 100, 30), hScrollbarValue, 1.0, 0.0, 10.0);
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
private float hScrollbarValue;
void OnGUI () {
hScrollbarValue = GUI.HorizontalScrollbar (new Rect (25, 25, 100, 30), hScrollbarValue, 1.0f, 0.0f, 10.0f);
}
}

サンプルのコードで作成された水平スクロール バー
VerticalScrollbar
The VerticalScrollbar Control is similar to a Slider Control, but visually similar to the scrolling elements of web browsers or word processors. This Control is used to navigate a ScrollView Control.
Basic Usage
Vertical Scrollbars are implemented identically to Vertical Sliders, with one exception: there is an additional argument which controls the height of the Scrollbar knob itself.
/* Vertical Scrollbar example */
// JavaScript
var vScrollbarValue : float;
function OnGUI () {
vScrollbarValue = GUI.VerticalScrollbar (Rect (25, 25, 100, 30), vScrollbarValue, 1.0, 10.0, 0.0);
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
private float vScrollbarValue;
void OnGUI () {
vScrollbarValue = GUI.VerticalScrollbar (new Rect (25, 25, 100, 30), vScrollbarValue, 1.0f, 10.0f, 0.0f);
}
}

The Vertical Scrollbar created by the example code
ScrollView
ScrollViews are Controls that display a viewable area of a much larger set of Controls.
Basic Usage
ScrollViews require two Rects as arguments. The first Rect defines the location and size of the viewable area of the ScrollView on the screen. The second Rect defines the size of the space contained inside the viewable area. If the space inside the viewable area is larger than the viewable area, Scrollbars will appear as needed. You must also assign and provide a 2D Vector which stores the position of the viewable area that is displayed.
/* ScrollView example */
// JavaScript
var scrollViewVector : Vector2 = Vector2.zero;
var innerText : String = "I am inside the ScrollView";
function OnGUI () {
// Begin the ScrollView
scrollViewVector = GUI.BeginScrollView (Rect (25, 25, 100, 100), scrollViewVector, Rect (0, 0, 400, 400));
// Put something inside the ScrollView
innerText = GUI.TextArea (Rect (0, 0, 400, 400), innerText);
// End the ScrollView
GUI.EndScrollView();
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
private Vector2 scrollViewVector = Vector2.zero;
private string innerText = "I am inside the ScrollView";
void OnGUI () {
// Begin the ScrollView
scrollViewVector = GUI.BeginScrollView (new Rect (25, 25, 100, 100), scrollViewVector, new Rect (0, 0, 400, 400));
// Put something inside the ScrollView
innerText = GUI.TextArea (new Rect (0, 0, 400, 400), innerText);
// End the ScrollView
GUI.EndScrollView();
}
}

The ScrollView created by the example code
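The rule that Scrollbars appear "as needed" reduces to a comparison between the two Rects. A JavaScript sketch of that comparison (the needsScrollbars helper is illustrative, not part of the Unity API):

```javascript
// Scrollbars are needed on an axis when the inner content size
// exceeds the viewable area's size on that axis.
function needsScrollbars(viewWidth, viewHeight, innerWidth, innerHeight) {
  return {
    horizontal: innerWidth > viewWidth,
    vertical: innerHeight > viewHeight
  };
}

// The example above: a 100x100 viewable area showing a 400x400 inner space
var bars = needsScrollbars(100, 100, 400, 400);
```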
Window
Windows are drag-able containers of Controls. They can receive and lose focus when clicked. Because of this, they are implemented slightly differently from the other Controls. Each Window has an id number, and its contents are declared inside a separate function that is called when the Window has focus.
Basic Usage
Windows are the only Control that requires an additional function to work properly. You must provide an id number and a function name to be executed for the Window. Inside the Window function, you create the actual behaviors or contained Controls.
/* Window example */
// JavaScript
var windowRect : Rect = Rect (20, 20, 120, 50);
function OnGUI () {
windowRect = GUI.Window (0, windowRect, WindowFunction, "My Window");
}
function WindowFunction (windowID : int) {
// Draw any Controls inside the window here
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
private Rect windowRect = new Rect (20, 20, 120, 50);
void OnGUI () {
windowRect = GUI.Window (0, windowRect, WindowFunction, "My Window");
}
void WindowFunction (int windowID) {
// Draw any Controls inside the window here
}
}

The Window created by the example code
GUI.changed
To detect if the user did any action in the GUI (clicked a button, dragged a slider, etc.), read the GUI.changed value from your script. It gets set to true when the user has done something, making it easy to validate the user input.
A common scenario would be a Toolbar, where you want to change a specific value based on which Button in the Toolbar was clicked. You don't want to assign the value in every call to OnGUI(), only when one of the Buttons has been clicked.
/* GUI.changed example */
// JavaScript
private var selectedToolbar : int = 0;
private var toolbarStrings = ["One", "Two"];
function OnGUI () {
// Determine which button is active, whether it was clicked this frame or not
selectedToolbar = GUI.Toolbar (Rect (50, 10, Screen.width - 100, 30), selectedToolbar, toolbarStrings);
// If the user clicked a new Toolbar button this frame, we'll process their input
if (GUI.changed)
{
print ("The toolbar was clicked");
if (selectedToolbar == 0)
{
print ("First button was clicked");
}
else
{
print ("Second button was clicked");
}
}
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
private int selectedToolbar = 0;
private string[] toolbarStrings = {"One", "Two"};
void OnGUI () {
// Determine which button is active, whether it was clicked this frame or not
selectedToolbar = GUI.Toolbar (new Rect (50, 10, Screen.width - 100, 30), selectedToolbar, toolbarStrings);
// If the user clicked a new Toolbar button this frame, we'll process their input
if (GUI.changed)
{
Debug.Log("The toolbar was clicked");
if (0 == selectedToolbar)
{
Debug.Log("First button was clicked");
}
else
{
Debug.Log("Second button was clicked");
}
}
}
}
GUI.changed will return true if any GUI Control placed before it was manipulated by the user.
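The pattern can be emulated outside Unity with a shared flag that each control sets when it consumes user input. A simplified JavaScript sketch - the GUI object and toolbar function below are stand-ins for illustration, not the real UnityGUI implementation:

```javascript
// A shared flag, reset at the start of each simulated OnGUI pass.
var GUI = { changed: false };

// A stand-in toolbar: clickedIndex is null when the user did nothing
// this frame, or the index of the button they clicked.
function toolbar(selected, clickedIndex) {
  if (clickedIndex !== null) {
    GUI.changed = true;   // the user interacted with this control
    return clickedIndex;
  }
  return selected;
}

// Frame 1: no interaction
GUI.changed = false;
var selection = toolbar(0, null);
var changedFrame1 = GUI.changed;

// Frame 2: the user clicks button 1
GUI.changed = false;
selection = toolbar(selection, 1);
var changedFrame2 = GUI.changed;
```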
Page last updated: 2012-11-13

Customizing your GUI Controls
Functional Controls are necessary for your game, and the appearance of those Controls is very important for the aesthetics of your game. In UnityGUI, you can fine-tune the appearance of your Controls with many details. Control appearances are dictated with GUIStyles. By default, when you create a Control without defining a GUIStyle, Unity's default GUIStyle is applied. This style is internal to Unity and can be used in published games for quick prototyping, or if you choose not to stylize your Controls.
When you have a large number of different GUIStyles to work with, you can define them all within a single GUISkin. A GUISkin is no more than a collection of GUIStyles.
How Styles change the look of your GUI Controls
GUIStyles are designed to mimic Cascading Style Sheets (CSS) for web browsers. Many different CSS methodologies have been adapted, including the differentiation of individual state properties for styling, and the separation of content and appearance.
Where the Control defines the content, the Style defines the appearance. This allows you to create combinations like a functional Toggle which looks like a normal Button.

Two Toggle Controls styled differently
The difference between Skins and Styles
As stated earlier, a GUISkin is a collection of GUIStyles. Styles define the appearance of a GUI Control. You do not have to use a Skin if you want to use a Style.

A single GUIStyle shown in the Inspector

A single GUISkin shown in the Inspector - observe that it contains multiple GUIStyles
Working with Styles
All GUI Control functions have a final optional parameter: the GUIStyle to use for displaying the Control. If this is omitted, Unity's default GUIStyle will be used. This works internally by applying the name of the Control type as a string, so GUI.Button() uses the "button" style, and GUI.Toggle() uses the "toggle" style. You can override the default GUIStyle for a Control by specifying it as the last parameter.
/* Override the default Control Style with a different style in the UnityGUI default Styles */
// JavaScript
function OnGUI () {
// Make a label that uses the "box" GUIStyle
GUI.Label (Rect (0,0,200,100), "Hi - I'm a label looking like a box", "box");
// Make a button that uses the "toggle" GUIStyle
GUI.Button (Rect (10,140,180,20), "This is a button", "toggle");
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
void OnGUI () {
// Make a label that uses the "box" GUIStyle
GUI.Label (new Rect (0,0,200,100), "Hi - I'm a label looking like a box", "box");
// Make a button that uses the "toggle" GUIStyle
GUI.Button (new Rect (10,140,180,20), "This is a button", "toggle");
}
}

The Controls created by the code above
Making a public variable GUIStyle
When you declare a public GUIStyle variable, all elements of the Style will show up in the Inspector. You can edit all of the different values there.
/* Overriding the default Control Style with one you've defined yourself */
// JavaScript
var customButton : GUIStyle;
function OnGUI () {
// Make a button. We pass the GUIStyle defined above as the style to use
GUI.Button (Rect (10,10,150,20), "I am a Custom Button", customButton);
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
public GUIStyle customButton;
void OnGUI () {
// Make a button. We pass the GUIStyle defined above as the style to use
GUI.Button (new Rect (10,10,150,20), "I am a Custom Button", customButton);
}
}
Changing the different style elements
When you have declared a GUIStyle, you can edit that style in the Inspector. There are a great number of states you can define, and apply to any type of Control.

Styles are modified on a per-script, per-GameObject basis
Any Control state must be assigned a Background color before the specified Text Color will be applied.
For more details about working with GUIStyles, please read the GUIStyle Component Reference page.
Working with Skins
For more complicated GUI systems, it makes sense to keep a collection of styles in one place. This is what a GUISkin does. A GUISkin contains multiple different Styles, essentially providing a complete face-lift to all GUI Controls.
Creating a new GUISkin
To create a GUISkin, select the GUI Skin item from the menu bar. This will create a GUI Skin inside your Project folder. Select it to see all of the GUIStyles defined by the Skin in the Inspector.
Applying the Skin to a GUI
To use a Skin you've created, assign it to GUI.skin in your OnGUI() function.
/* Make a property containing a reference to the skin you want to use */
// JavaScript
var mySkin : GUISkin;
function OnGUI () {
// Assign the skin to be the one currently used
GUI.skin = mySkin;
// Make a button. This will get the default "button" style from the skin assigned to mySkin
GUI.Button (Rect (10,10,150,20), "Skinned Button");
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
public GUISkin mySkin;
void OnGUI () {
// Assign the skin to be the one currently used
GUI.skin = mySkin;
// Make a button. This will get the default "button" style from the skin assigned to mySkin
GUI.Button (new Rect (10,10,150,20), "Skinned Button");
}
}
You can switch skins as much as you like throughout a single OnGUI() call.
/* Example of switching skins in the same OnGUI() call */
// JavaScript
var mySkin : GUISkin;
var toggle = true;
function OnGUI () {
// Assign the skin to be the one currently used
GUI.skin = mySkin;
// Make a toggle. This will get the "button" style from the skin assigned to mySkin
toggle = GUI.Toggle (Rect (10,10,150,20), toggle, "Skinned Button", "button");
// Assign the currently used skin to be Unity's default
GUI.skin = null;
// Make a button. This will get the default "button" style from the built-in skin
GUI.Button (Rect (10,35,150,20), "Built-in Button");
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
public GUISkin mySkin;
private bool toggle = true;
void OnGUI () {
// Assign the skin to be the one currently used
GUI.skin = mySkin;
// Make a toggle. This will get the "button" style from the skin assigned to mySkin
toggle = GUI.Toggle (new Rect (10,10,150,20), toggle, "Skinned Button", "button");
// Assign the currently used skin to be Unity's default
GUI.skin = null;
// Make a button. This will get the default "button" style from the built-in skin
GUI.Button (new Rect (10,35,150,20), "Built-in Button");
}
}
Page last updated: 2012-11-13
Fixed Layout vs Automatic Layout
There are two different modes you can use to arrange and organize your GUI: Fixed and Automatic. Up to this point, all of the UnityGUI examples in this guide have used Fixed Layout. To use Automatic Layout, write GUILayout instead of GUI when calling control functions. You do not have to use one Layout mode over the other, and you can use both modes at once in the same OnGUI() function.
Fixed Layout makes sense to use when you have a pre-designed interface to work from. Automatic Layout makes sense to use when you don't know how many elements you need up front, or you don't want to worry about hand-positioning each Control. For example, if you are creating a number of different buttons based on Save Game files, you don't know exactly how many buttons will be drawn; in this case Automatic Layout might make more sense. It really depends on the design of your game and how you want to present your interface.
There are two key differences when using Automatic Layout:
- GUILayout is used instead of GUI.
- No Rect() function is required for Automatic Layout Controls.
/* Two key differences when using Automatic Layout */
// JavaScript
function OnGUI () {
// Fixed Layout
GUI.Button (Rect (25,25,100,30), "I am a Fixed Layout Button");
// Automatic Layout
GUILayout.Button ("I am an Automatic Layout Button");
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
void OnGUI () {
// Fixed Layout
GUI.Button (new Rect (25,25,100,30), "I am a Fixed Layout Button");
// Automatic Layout
GUILayout.Button ("I am an Automatic Layout Button");
}
}
Arranging Controls
Depending on which Layout Mode you're using, there are different hooks for controlling where your Controls are positioned and how they are grouped together. In Fixed Layout, you can put different Controls into Groups. In Automatic Layout, you can put different Controls into Areas, Horizontal Groups, and Vertical Groups.
Fixed Layout - Groups
Groups are a convention available in Fixed Layout Mode. They allow you to define areas of the screen that contain multiple Controls. You define which Controls are inside a Group by using the GUI.BeginGroup() and GUI.EndGroup() functions. All Controls inside a Group will be positioned based on the Group's top-left corner instead of the screen's top-left corner. This way, if you reposition the Group at runtime, the relative positions of all Controls in the Group will be maintained.
As an example, it's very easy to center multiple Controls on-screen.
/* Center multiple Controls on-screen using a Group */
// JavaScript
function OnGUI () {
// Make a group on the center of the screen
GUI.BeginGroup (Rect (Screen.width / 2 - 50, Screen.height / 2 - 50, 100, 100));
// All rectangles are now adjusted to the group. (0,0) is the top-left corner of the group.
// We'll make a box so you can see where the group is on-screen.
GUI.Box (Rect (0,0,100,100), "Group is here");
GUI.Button (Rect (10,40,80,30), "Click me");
// End the group we started above. This is very important to remember!
GUI.EndGroup ();
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
void OnGUI () {
// Make a group on the center of the screen
GUI.BeginGroup (new Rect (Screen.width / 2 - 50, Screen.height / 2 - 50, 100, 100));
// All rectangles are now adjusted to the group. (0,0) is the top-left corner of the group.
// We'll make a box so you can see where the group is on-screen.
GUI.Box (new Rect (0,0,100,100), "Group is here");
GUI.Button (new Rect (10,40,80,30), "Click me");
// End the group we started above. This is very important to remember!
GUI.EndGroup ();
}
}

The example above centers Controls regardless of the screen resolution
You can also nest multiple Groups inside each other. When you do this, each Group's contents are clipped to its parent's space.
/* Using multiple Groups to clip the displayed Contents */
// JavaScript
var bgImage : Texture2D; // background image that is 256 x 32
var fgImage : Texture2D; // foreground image that is 256 x 32
var playerEnergy = 1.0; // a float between 0.0 and 1.0
function OnGUI () {
// Create one Group to contain both images
// Adjust the first 2 coordinates to place it somewhere else on-screen
GUI.BeginGroup (Rect (0,0,256,32));
// The background image
GUI.Box (Rect (0,0,256,32), bgImage);
// Create a second Group which will be clipped
// We want to clip the image and not scale it, which is why we need the second Group
GUI.BeginGroup (Rect (0,0,playerEnergy * 256, 32));
// The foreground image
GUI.Box (Rect (0,0,256,32), fgImage);
// End both Groups
GUI.EndGroup ();
GUI.EndGroup ();
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
// background image that is 256 x 32
public Texture2D bgImage;
// foreground image that is 256 x 32
public Texture2D fgImage;
// a float between 0.0 and 1.0
public float playerEnergy = 1.0f;
void OnGUI () {
// Create one Group to contain both images
// Adjust the first 2 coordinates to place it somewhere else on-screen
GUI.BeginGroup (new Rect (0,0,256,32));
// The background image
GUI.Box (new Rect (0,0,256,32), bgImage);
// Create a second Group which will be clipped
// We want to clip the image and not scale it, which is why we need the second Group
GUI.BeginGroup (new Rect (0,0,playerEnergy * 256, 32));
// The foreground image
GUI.Box (new Rect (0,0,256,32), fgImage);
// End both Groups
GUI.EndGroup ();
GUI.EndGroup ();
}
}

The Groups are nested inside each other to create a clipping behavior
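The energy-bar clipping trick above reduces to one multiplication: the inner Group's width is the energy fraction times the full bar width, and anything past that width is clipped away. A JavaScript sketch of that computation (the clippedBarWidth helper is illustrative, not part of the Unity API):

```javascript
// A bar image of a given full width, clipped to a fraction between 0.0 and 1.0.
function clippedBarWidth(playerEnergy, fullWidth) {
  return playerEnergy * fullWidth;
}

var fullBar = clippedBarWidth(1.0, 256);   // full energy shows the whole 256px image
var halfBar = clippedBarWidth(0.5, 256);   // half energy clips it to 128px
```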
Automatic Layout - Areas
Areas are used in Automatic Layout mode only. They are similar to Fixed Layout Groups in functionality, as they define a finite portion of the screen to contain GUILayout Controls. Because of the nature of Automatic Layout, you will almost always use Areas.
In Automatic Layout mode, you do not define the area of the screen where the Control will be drawn at the Control level. The Control will automatically be placed at the upper-leftmost point of its containing Area. This might be the screen. You can also create manually-positioned Areas. GUILayout Controls inside an Area will be placed at the upper-leftmost point of that Area.
/* A button placed in no Area, and a button placed in an Area halfway across the screen. */
// JavaScript
function OnGUI () {
GUILayout.Button ("I am not inside an Area");
GUILayout.BeginArea (Rect (Screen.width/2, Screen.height/2, 300, 300));
GUILayout.Button ("I am completely inside an Area");
GUILayout.EndArea ();
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
void OnGUI () {
GUILayout.Button ("I am not inside an Area");
GUILayout.BeginArea (new Rect (Screen.width/2, Screen.height/2, 300, 300));
GUILayout.Button ("I am completely inside an Area");
GUILayout.EndArea ();
}
}
Inside an Area, Controls with visible elements like Buttons and Boxes will stretch their width to the full length of the Area.
Automatic Layout - Horizontal and Vertical Groups
When using Automatic Layout, Controls will by default appear one after another from top to bottom. There are plenty of occasions where you will want finer control over where Controls are placed and how they are arranged. If you are using the Automatic Layout mode, you have the option of Horizontal and Vertical Groups.
Like the other layout Controls, you call separate functions to start or end these Groups. The specific functions are GUILayout.BeginHorizontal(), GUILayout.EndHorizontal(), GUILayout.BeginVertical(), and GUILayout.EndVertical().
Any Controls inside a Horizontal Group are always positioned horizontally. Any Controls inside a Vertical Group are always positioned vertically. This sounds plain until you start nesting Groups inside each other. This allows you to arrange any number of Controls in any imaginable configuration.
/* Using nested Horizontal and Vertical Groups */
// JavaScript
var sliderValue = 1.0;
var maxSliderValue = 10.0;
function OnGUI()
{
// Wrap everything in the designated GUI Area
GUILayout.BeginArea (Rect (0,0,200,60));
// Begin the singular Horizontal Group
GUILayout.BeginHorizontal();
// Place a Button normally
if (GUILayout.RepeatButton ("Increase max\nSlider Value"))
{
maxSliderValue += 3.0 * Time.deltaTime;
}
// Arrange two more Controls vertically beside the Button
GUILayout.BeginVertical();
GUILayout.Box("Slider Value: " + Mathf.Round(sliderValue));
sliderValue = GUILayout.HorizontalSlider (sliderValue, 0.0, maxSliderValue);
// End the Groups and Area
GUILayout.EndVertical();
GUILayout.EndHorizontal();
GUILayout.EndArea ();
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
private float sliderValue = 1.0f;
private float maxSliderValue = 10.0f;
void OnGUI()
{
// Wrap everything in the designated GUI Area
GUILayout.BeginArea (new Rect (0,0,200,60));
// Begin the singular Horizontal Group
GUILayout.BeginHorizontal();
// Place a Button normally
if (GUILayout.RepeatButton ("Increase max\nSlider Value"))
{
maxSliderValue += 3.0f * Time.deltaTime;
}
// Arrange two more Controls vertically beside the Button
GUILayout.BeginVertical();
GUILayout.Box("Slider Value: " + Mathf.Round(sliderValue));
sliderValue = GUILayout.HorizontalSlider (sliderValue, 0.0f, maxSliderValue);
// End the Groups and Area
GUILayout.EndVertical();
GUILayout.EndHorizontal();
GUILayout.EndArea();
}
}

Three Controls arranged with Horizontal & Vertical Groups
Using GUILayoutOptions to define some Controls
You can use GUILayoutOptions to override some of the Automatic Layout parameters. You do this by providing the options as the final parameters of the GUILayout Control.
In the Areas example above, the Buttons stretch their width to the full width of the Area. You can override this if you like.
/* Using GUILayoutOptions to override Automatic Layout Control properties */
//JavaScript
function OnGUI () {
GUILayout.BeginArea (Rect (100, 50, Screen.width-200, Screen.height-100));
GUILayout.Button ("I am a regular Automatic Layout Button");
GUILayout.Button ("My width has been overridden", GUILayout.Width (95));
GUILayout.EndArea ();
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
void OnGUI () {
GUILayout.BeginArea (new Rect (100, 50, Screen.width-200, Screen.height-100));
GUILayout.Button ("I am a regular Automatic Layout Button");
GUILayout.Button ("My width has been overridden", GUILayout.Width (95));
GUILayout.EndArea ();
}
}
For a full list of possible GUILayoutOptions, please read the GUILayoutOption Scripting Reference page.
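Several layout options can also be combined on a single control. The following C# fragment is a minimal sketch of this; the specific widths and heights used here are arbitrary example values, not taken from the manual.

```csharp
// C# - a sketch combining several GUILayoutOptions on single controls.
// The sizes used here are arbitrary example values.
using UnityEngine;
using System.Collections;

public class GUITest : MonoBehaviour {
	void OnGUI () {
		GUILayout.BeginArea (new Rect (100, 50, Screen.width-200, Screen.height-100));
		// Fix both the width and the height of this button
		GUILayout.Button ("Fixed size", GUILayout.Width (120), GUILayout.Height (40));
		// Let the width vary between a minimum and a maximum instead of fixing it
		GUILayout.Button ("Flexible width", GUILayout.MinWidth (80), GUILayout.MaxWidth (200));
		GUILayout.EndArea ();
	}
}
```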
Page last updated: 2012-11-13
gui-Extending
There are a number of ways to leverage and extend UnityGUI to meet your needs. Controls can be mixed and created, and you have a lot of leverage in how user input to the GUI is processed.
Compound Controls
There might be situations in your GUI where two types of Controls always appear together. For example, maybe you are making a character creation screen with several horizontal sliders. All of those sliders need a label to identify them, so the player knows what he is adjusting. In this case, you could pair every call to GUI.HorizontalSlider() with a call to GUI.Label(), or you could create a Compound Control which includes both a Label and a Slider together.
/* Label and Slider Compound Control */
// JavaScript
var mySlider : float = 1.0;
function OnGUI () {
mySlider = LabelSlider (Rect (10, 100, 100, 20), mySlider, 5.0, "Label text here");
}
function LabelSlider (screenRect : Rect, sliderValue : float, sliderMaxValue : float, labelText : String) : float {
GUI.Label (screenRect, labelText);
screenRect.x += screenRect.width; // <- Push the Slider to the end of the Label
sliderValue = GUI.HorizontalSlider (screenRect, sliderValue, 0.0, sliderMaxValue);
return sliderValue;
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
private float mySlider = 1.0f;
void OnGUI () {
mySlider = LabelSlider (new Rect (10, 100, 100, 20), mySlider, 5.0f, "Label text here");
}
float LabelSlider (Rect screenRect, float sliderValue, float sliderMaxValue, string labelText) {
GUI.Label (screenRect, labelText);
// <- Push the Slider to the end of the Label
screenRect.x += screenRect.width;
sliderValue = GUI.HorizontalSlider (screenRect, sliderValue, 0.0f, sliderMaxValue);
return sliderValue;
}
}
In this example, calling LabelSlider() and passing the correct arguments will provide a Label paired with a Horizontal Slider. When writing Compound Controls, remember to return the correct value at the end of the function to make it interactive.

The above Compound Control always creates this pair of Controls
Static Compound Controls
By using Static functions, you can create an entire collection of your own Compound Controls that are self-contained. This way, you do not have to declare your functions in the same script you want to use them in.
/* This script is called CompoundControls */
// JavaScript
static function LabelSlider (screenRect : Rect, sliderValue : float, sliderMaxValue : float, labelText : String) : float {
GUI.Label (screenRect, labelText);
screenRect.x += screenRect.width; // <- Push the Slider to the end of the Label
sliderValue = GUI.HorizontalSlider (screenRect, sliderValue, 0.0, sliderMaxValue);
return sliderValue;
}
// C#
using UnityEngine;
using System.Collections;
public class CompoundControls : MonoBehaviour {
public static float LabelSlider (Rect screenRect, float sliderValue, float sliderMaxValue, string labelText) {
GUI.Label (screenRect, labelText);
// <- Push the Slider to the end of the Label
screenRect.x += screenRect.width;
sliderValue = GUI.HorizontalSlider (screenRect, sliderValue, 0.0f, sliderMaxValue);
return sliderValue;
}
}
By saving the above example in a script called CompoundControls, you can call the LabelSlider() function from any other script simply by typing CompoundControls.LabelSlider() and providing your arguments.
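For example, another script could use the static control like this (a minimal sketch; the class name CharacterScreen and the "Strength" label are illustrative, not part of the original example):

```csharp
// C# - calling the static Compound Control from a different script.
// CharacterScreen and the "Strength" label are illustrative names.
using UnityEngine;
using System.Collections;

public class CharacterScreen : MonoBehaviour {
	private float strength = 1.0f;
	void OnGUI () {
		// CompoundControls.LabelSlider is static, so it works from any script
		strength = CompoundControls.LabelSlider (new Rect (10, 10, 100, 20), strength, 5.0f, "Strength");
	}
}
```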
Elaborate Compound Controls
You can get very creative with Compound Controls. They can be arranged and grouped in any way you like. The following example creates a re-usable RGB Slider.
/* RGB Slider Compound Control */
// JavaScript
var myColor : Color;
function OnGUI () {
myColor = RGBSlider (Rect (10,10,200,10), myColor);
}
function RGBSlider (screenRect : Rect, rgb : Color) : Color {
rgb.r = GUI.HorizontalSlider (screenRect, rgb.r, 0.0, 1.0);
screenRect.y += 20; // <- Move the next control down a bit to avoid overlapping
rgb.g = GUI.HorizontalSlider (screenRect, rgb.g, 0.0, 1.0);
screenRect.y += 20; // <- Move the next control down a bit to avoid overlapping
rgb.b = GUI.HorizontalSlider (screenRect, rgb.b, 0.0, 1.0);
return rgb;
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
public Color myColor;
void OnGUI () {
myColor = RGBSlider (new Rect (10,10,200,10), myColor);
}
Color RGBSlider (Rect screenRect, Color rgb) {
rgb.r = GUI.HorizontalSlider (screenRect, rgb.r, 0.0f, 1.0f);
// <- Move the next control down a bit to avoid overlapping
screenRect.y += 20;
rgb.g = GUI.HorizontalSlider (screenRect, rgb.g, 0.0f, 1.0f);
// <- Move the next control down a bit to avoid overlapping
screenRect.y += 20;
rgb.b = GUI.HorizontalSlider (screenRect, rgb.b, 0.0f, 1.0f);
return rgb;
}
}

The RGB Slider created by the example above
Now, let's build Compound Controls on top of each other, in order to demonstrate how Compound Controls can be used within other Compound Controls. To do this, we will create a new RGB Slider like the one above, but this time using the LabelSlider. This way we'll always have a label telling us which slider corresponds to which color.
/* RGB Label Slider Compound Control */
// JavaScript
var myColor : Color;
function OnGUI () {
myColor = RGBLabelSlider (Rect (10,10,200,20), myColor);
}
function RGBLabelSlider (screenRect : Rect, rgb : Color) : Color {
rgb.r = CompoundControls.LabelSlider (screenRect, rgb.r, 1.0, "Red");
screenRect.y += 20; // <- Move the next control down a bit to avoid overlapping
rgb.g = CompoundControls.LabelSlider (screenRect, rgb.g, 1.0, "Green");
screenRect.y += 20; // <- Move the next control down a bit to avoid overlapping
rgb.b = CompoundControls.LabelSlider (screenRect, rgb.b, 1.0, "Blue");
return rgb;
}
// C#
using UnityEngine;
using System.Collections;
public class GUITest : MonoBehaviour {
public Color myColor;
void OnGUI () {
myColor = RGBSlider (new Rect (10,10,200,30), myColor);
}
Color RGBSlider (Rect screenRect, Color rgb) {
rgb.r = CompoundControls.LabelSlider (screenRect, rgb.r, 1.0f, "Red");
// <- Move the next control down a bit to avoid overlapping
screenRect.y += 20;
rgb.g = CompoundControls.LabelSlider (screenRect, rgb.g, 1.0f, "Green");
// <- Move the next control down a bit to avoid overlapping
screenRect.y += 20;
rgb.b = CompoundControls.LabelSlider (screenRect, rgb.b, 1.0f, "Blue");
return rgb;
}
}

The Compound RGB Label Slider created by the example above
gui-ExtendingEditor
Introduction
You can create your own custom design tools inside Unity through Editor Windows. Scripts that derive from EditorWindow instead of MonoBehaviour can leverage both GUI/GUILayout and EditorGUI/EditorGUILayout controls. Alternatively, you can use Custom Inspectors to expose these GUI controls in your GameObject Inspector.
Editor Windows
You can create any number of custom windows in your app. These behave just like the Inspector, Scene or any other built-in ones. This is a great way to add a user interface to a sub-system of your game.

Custom Editor Interface by Serious Games Interactive, used for scripting cutscene actions
Making a custom Editor Window involves the following simple steps:
- Create a script that derives from EditorWindow.
- Use code to trigger the window to display itself.
- Implement the GUI code for your tool.
Deriving from EditorWindow
In order to make your Editor Window, your script must be stored inside a folder called Editor. Make a class in this script that derives from EditorWindow. Then write your GUI controls in the inner OnGUI function.
class MyWindow extends EditorWindow {
function OnGUI () {
// The actual window code goes here
}
}
MyWindow.js - placed in a folder called Editor within your project.
Showing the window
In order to show the window on screen, make a menu item that displays it. This is done by creating a function which is activated by the MenuItem property.
The default behavior in Unity is to recycle windows, so selecting the menu item again will show any existing window instance. This is done using the function EditorWindow.GetWindow, like this:
class MyWindow extends EditorWindow {
@MenuItem ("Window/My Window")
static function ShowWindow () {
EditorWindow.GetWindow (MyWindow);
}
function OnGUI () {
// The actual window code goes here
}
}
Showing the MyWindow
This will create a standard, dockable editor window that saves its position between invocations and can be used in custom layouts. To have more control over what gets created, you can use GetWindowWithRect.
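As a minimal sketch, GetWindowWithRect can be used like this (the menu path and window size are illustrative values, not taken from the manual):

```csharp
// C# - a sketch of opening an editor window with a fixed rect.
// The menu path and window size are illustrative values.
using UnityEngine;
using UnityEditor;

public class MyFixedWindow : EditorWindow {
	[MenuItem ("Window/My Fixed Size Window")]
	public static void ShowWindow () {
		// Unlike GetWindow, this forces the window to the given position and size
		EditorWindow.GetWindowWithRect (typeof (MyFixedWindow), new Rect (0, 0, 300, 100));
	}
	void OnGUI () {
		GUILayout.Label ("This window has a fixed size");
	}
}
```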
Implementing your window's GUI
The actual contents of the window are rendered by implementing the OnGUI function. You can use the same UnityGUI classes you use for your in-game GUI (GUI and GUILayout). In addition, some extra GUI controls are provided in the editor-only classes EditorGUI and EditorGUILayout. These classes add to the controls already available in the normal classes, so you can mix and match them at will.
The following C# code shows how you can add GUI elements to your custom EditorWindow:
using UnityEditor;
using UnityEngine;
public class MyWindow : EditorWindow
{
string myString = "Hello World";
bool groupEnabled;
bool myBool = true;
float myFloat = 1.23f;
// Add menu item named "My Window" to the Window menu
[MenuItem("Window/My Window")]
public static void ShowWindow()
{
//Show existing window instance. If one doesn't exist, make one.
EditorWindow.GetWindow(typeof(MyWindow));
}
void OnGUI()
{
GUILayout.Label ("Base Settings", EditorStyles.boldLabel);
myString = EditorGUILayout.TextField ("Text Field", myString);
groupEnabled = EditorGUILayout.BeginToggleGroup ("Optional Settings", groupEnabled);
myBool = EditorGUILayout.Toggle ("Toggle", myBool);
myFloat = EditorGUILayout.Slider ("Slider", myFloat, -3, 3);
EditorGUILayout.EndToggleGroup ();
}
}
The resulting window from this example looks like this:

Custom Editor Window created using supplied example.
For more info, take a look at the example and documentation on the EditorWindow page.
Custom Inspectors
A key to increasing the speed of game creation is to create custom inspectors for components you use frequently. For the sake of example, we'll use this exceedingly simple script that always keeps an object looking at a point:
var lookAtPoint = Vector3.zero;
function Update () {
transform.LookAt (lookAtPoint);
}
LookAtPoint.js
This will keep the object oriented towards a point in world space. Let's make it cool!
The first step to making it work nicely in the editor is to make the script run even when you're not testing the game. We do this by adding an ExecuteInEditMode attribute to it:
@script ExecuteInEditMode()
var lookAtPoint = Vector3.zero;
function Update () {
transform.LookAt (lookAtPoint);
}
Try adding the script to your main camera and dragging it around in the Scene view.
Making a Custom Editor
This is all well and good, but we can make the inspector a lot nicer by customizing it. To do that we need to create an Editor for it. Make a JavaScript called LookAtPointEditor in a folder called Editor:
@CustomEditor (LookAtPoint)
class LookAtPointEditor extends Editor {
function OnInspectorGUI () {
target.lookAtPoint = EditorGUILayout.Vector3Field ("Look At Point", target.lookAtPoint);
if (GUI.changed)
EditorUtility.SetDirty (target);
}
}
This class has to derive from Editor. The @CustomEditor attribute informs Unity which component it should act as an editor for.
The code in OnInspectorGUI is executed whenever Unity displays the inspector. You can put any GUI code in here - it works just like OnGUI does for games, but is run inside the inspector. Editor defines the target property that you can use to access the object being inspected.
By checking GUI.changed, the EditorUtility.SetDirty code is executed if the user has changed any of the values.
In this case, we make one of the Vector3 fields like the ones used in the Transform Inspector - like this:

Now we have a shiny new inspector
There's a lot more that can be done here, but this will do for now - we've got bigger fish to fry...
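If you prefer C#, the same inspector could be sketched as follows. This assumes a hypothetical C# version of the LookAtPoint component with a public Vector3 field named lookAtPoint; the cast is needed because target is typed as Object in C#:

```csharp
// C# - a sketch of the same custom inspector, placed in an Editor folder.
// Assumes a C# LookAtPoint MonoBehaviour with a public Vector3 lookAtPoint field.
using UnityEngine;
using UnityEditor;

[CustomEditor (typeof (LookAtPoint))]
public class LookAtPointEditor : Editor {
	public override void OnInspectorGUI () {
		// target is typed as Object, so cast it to the inspected component type
		LookAtPoint lookAt = (LookAtPoint)target;
		lookAt.lookAtPoint = EditorGUILayout.Vector3Field ("Look At Point", lookAt.lookAtPoint);
		if (GUI.changed)
			EditorUtility.SetDirty (target);
	}
}
```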
Scene View Additions
You can add extra code to the Scene View by implementing an OnSceneGUI in your custom editor. In this case, we'll add a second set of position handles, letting users drag the look-at point around in the Scene view.
@CustomEditor (LookAtPoint)
class LookAtPointEditor extends Editor {
function OnInspectorGUI () {
target.lookAtPoint = EditorGUILayout.Vector3Field ("Look At Point", target.lookAtPoint);
if (GUI.changed)
EditorUtility.SetDirty (target);
}
function OnSceneGUI () {
target.lookAtPoint = Handles.PositionHandle (target.lookAtPoint, Quaternion.identity);
if (GUI.changed)
EditorUtility.SetDirty (target);
}
}
OnSceneGUI works just like OnInspectorGUI, except it gets run in the Scene view. To help you make your editing interface, you can use the functions defined in the Handles class. All functions in there are designed for working in 3D Scene views.
If you want to put 2D GUI objects (GUI, EditorGUI and friends) in the Scene view, you need to wrap them in calls to Handles.BeginGUI() and Handles.EndGUI().
Page last updated: 2012-11-13
Network Reference Guide
Networking is a very large, detailed topic. In Unity it is extremely simple to create network functionality, but it is still best to understand the breadth and depth involved with creating any kind of network game. The following pages explain the fundamentals of networking concepts, and the Unity-specific implementations of these concepts. If you have never created a network game before, it is highly recommended that you read this guide in detail before attempting to create one.
High Level Overview
This section outlines all the concepts involved in networking. It serves as an introduction to the deeper topics.
Networking Elements in Unity
This section of the guide covers Unity's implementation of the ideas discussed above.
Network View
Network Views are Components used to share data across the network. They are extremely important to understand. This page explains them in detail.
RPC Details
RPC stands for Remote Procedure Call. It is a way of calling a function on a remote machine. This may be a client calling a function on the server, or the server calling a function on all or specific clients. This page explains RPCs in detail.
State Synchronization
State Synchronization is a method of regularly updating a specific set of data across two or more game instances running on the network.
Network Instantiate
One of the most difficult aspects of networking is ownership of objects. Who controls what? Network Instantiation determines this logic for you. This page explains how to use it. It also describes the more complex alternatives, for situations where you need more control.
Master Server
The Master Server is like a game lobby where servers can advertise their presence to clients. It is also a solution for enabling communication from behind a firewall or home network. When needed, it can use a technique called NAT punchthrough (with the help of a facilitator) to make sure your players can always connect with each other. This page explains how to use the Master Server.
Minimizing Bandwidth
Every choice you make about where and how to share data affects the bandwidth your game uses. This page shares details about bandwidth usage and ways to keep it to a minimum.

iOS
Special details about networking on iOS
Networking on iOS

Android
Page last updated: 2012-11-13
Networking on iOS
iOS and Android
Networking for mobile devices (iOS/Android)
The Unity iOS/Android networking engine is fully compatible with networking for desktop devices, so your existing networking code should work on iOS/Android devices. However, you may want to re-engineer your networking code to suit Wi-Fi or cellular networks. Furthermore, depending on the mobile device, the networking chip may also be a bottleneck, since pings between mobile devices (or between a mobile device and a desktop) are around 40-60 ms even on high-performance Wi-Fi networks.
Using networking, you can create a game that can be played simultaneously from desktop and iOS over a Wi-Fi or cellular network. In the latter case, the game server must have a public IP address (accessible over the internet).
Note: EDGE / 3G data connections go to sleep very quickly if no data is sent, so sometimes you need to wake up the networking. Before making a Unity network connection, make a connection to a site with the WWW class (and yield until it finishes).
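This wake-up pattern could be sketched as follows (the URL, IP address and port are placeholder values, not part of any real service):

```csharp
// C# - a sketch of waking up a cellular data connection before connecting.
// The URL, IP address and port below are placeholder values.
using UnityEngine;
using System.Collections;

public class MobileConnector : MonoBehaviour {
	IEnumerator Start () {
		// Touch a web site first so the sleeping EDGE/3G connection wakes up...
		WWW wakeUp = new WWW ("http://www.example.com");
		yield return wakeUp; // ...and wait until the request finishes
		// Only then establish the Unity network connection
		Network.Connect ("203.0.113.1", 25002);
	}
}
```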
Page last updated: 2012-11-13
net-HighLevelOverview
This section covers general networking concepts that should be understood before developing a game with Unity's networking architecture.
What is Networking?
Networking is communication between two or more computers. A fundamental idea is that of the relationship between the client (the computer requesting information) and the server (the computer responding to the information request). The server can either be a dedicated host machine used by all clients, or simply a player machine running the game (as a client) which also acts as the server for other players. Once a server has been established and a client has connected to it, the two computers can exchange data as demanded by gameplay.
Creating a network game requires a lot of attention to some very specific details. Even though network actions are easy to design and create in Unity, networking remains rather complex. A major design decision in Unity is to make networking as robust and flexible as possible. This means that you, as the game creator, are responsible for things that might be handled in an automatic but less robust way in other engines. The choices you make can have a major influence on the design of your game, so it is best to make them as early in the design stage as possible. Understanding networking concepts will help you plan your game design well and avoid problems during implementation.
Networking Approaches
There are two common and proven approaches to structuring a network game, known as Authoritative Server and Non-Authoritative Server. Both approaches rely on a server connecting clients and passing information between them. Both also offer privacy for end users, since clients never actually connect directly with each other or have their IP addresses revealed to other clients.
Authoritative Server
The authoritative server approach requires the server to perform all world simulation, application of game rules, and processing of input from the player clients. Each client sends its input (in the form of keystrokes or requested actions) to the server and continuously receives the current state of the game from the server. The client never makes any changes to the game state itself. Instead, it tells the server what it wants to do, and the server handles the request and replies to the client to explain what happened as a result.
Fundamentally, there is a layer of separation between what the player wants to do and what actually happens. This allows the server to listen to every client's requests before deciding how to update the game state.
An advantage of this approach is that it makes cheating much harder for clients. For example, a client cannot cheat by telling the server that it has killed an enemy, since it doesn't get to make that decision by itself. It can only tell the server that its weapon was fired, and from there it is up to the server to determine whether or not a kill was made.
Another example of an authoritative server is a multiplayer game that relies on physics. If each client runs its own physics simulation, then small variations between clients will gradually cause them to drift out of sync. If the simulation is handled on a central server, however, consistency is guaranteed.
A potential disadvantage of authoritative servers is the time it takes for messages to travel over the network. If the player presses a control to move forward and it takes a tenth of a second for the response to return from the server, the delay will be noticeable to the player. One solution to this is so-called client-side prediction. The essence of this technique is that the client runs its own local version of the game while receiving authoritative updates from the server. It should normally only be used for simple game actions, and not for updates to information that is significant to the game state. For example, it would be unwise to report to a player that an enemy has been killed, only for the server to overrule this decision later.
Since client-side prediction is an advanced subject, we don't attempt to cover it in this guide, but books and web resources are available if you want to investigate it further.
An authoritative server has a greater processing overhead than a non-authoritative one. When the server is not required to handle all changes to the game state, much of this load can be distributed between the clients.
Non-Authoritative Server
A non-authoritative server does not control the outcome of every user input. The clients themselves process user input and game logic locally, then send the result of any determined actions to the server. The server synchronizes all actions with the world state. This is easier to implement from a design perspective, as the server really just relays messages between the clients and does no extra processing beyond what the clients do.
There is no need for any prediction methods, as the clients handle all physics and events themselves and relay what happened to the server. They are the owners of their objects, and only the owning agent is permitted to send modifications of those objects over the network.
Methods of Network Communication
Now that we've covered the basic architectures of networked games, we will explore the lower level of how clients and servers can talk to each other.
There are two relevant methods: Remote Procedure Calls and State Synchronization. It is not uncommon to use both methods at different points in any particular game.
Remote Procedure Calls
Remote Procedure Calls (RPCs) are used to invoke functions on other computers across the network, although the "network" can also mean the message channel between the client and server when they are both running on the same computer. Clients can send RPCs to the server, and the server can send RPCs to one or more clients. Most commonly, they are used for actions that happen infrequently. For example, if a client flips a switch to open a door, it can send an RPC to the server telling it that the door has been opened. The server can then send another RPC to all clients, invoking their local functions to open that same door. They are used for the management and execution of individual events.
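The door example above could be sketched like this in C# (the OpenDoor function and doorIsOpen field are illustrative names, and a NetworkView component must be attached, as Unity's RPC pages describe):

```csharp
// C# - a sketch of the door-opening RPC described above.
// OpenDoor and doorIsOpen are illustrative names, not part of any API.
using UnityEngine;
using System.Collections;

public class Door : MonoBehaviour {
	bool doorIsOpen = false;

	void OnTriggerEnter (Collider other) {
		// Ask every connected machine (including this one) to run OpenDoor locally
		networkView.RPC ("OpenDoor", RPCMode.All);
	}

	[RPC]
	void OpenDoor () {
		// Each machine executes this locally, so the same door opens everywhere
		doorIsOpen = true;
	}
}
```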
State Synchronization
State Synchronization is used to share data that is constantly changing. The best example of this would be a player's position in an action game. The player is always moving, running around, jumping, etc. All the other players on the network, even those who are not controlling this player locally, need to know where he is and what he is doing. By constantly relaying data about this player's position, the game can accurately represent that position to the other players.
This kind of data is sent across the network regularly and frequently. Since this data is time-sensitive, and it takes time for it to travel from one machine to the next, it is important to reduce the amount of data that is sent as much as possible. In simpler terms, state synchronization naturally requires a lot of bandwidth, so you should aim to use as little bandwidth as possible.
Connecting servers and clients together
Connecting servers and clients together can be a complex process. Machines can have private or public IP addresses, and local or external firewalls can block access. Unity networking aims to handle as many situations as possible, but there is no universal solution.
Private addresses are IP addresses which are not accessible directly from the internet (they are also called Network Address Translation, or NAT, addresses after the method used to implement them). Briefly, a private address passes through a local router which translates the address into a public address. In this way, a group of machines with private addresses can use a single public IP address to communicate with the internet. This is fine until someone elsewhere on the internet wants to initiate contact with one of the private addresses. The communication must happen via the public address of the router, which must then pass the message on to the private address. A technique called NAT punchthrough uses a shared server, known as a facilitator, to mediate communication in such a way that the private address can be reached from the public address. This works by having the private address first contact the facilitator, which "punches" a hole through the local router. (Note that the actual details of NAT punchthrough are somewhat more complicated than this.)
Public addresses are more straightforward. Here, the main issue is that connectivity can be blocked by an internal or external firewall (an internal firewall is one that runs locally on the computer it is protecting). For an internal firewall, the user can be asked to open up a particular port to make the game server accessible. An external firewall, by contrast, is not under the user's control. Unity can attempt to use NAT punchthrough to get access through an external firewall, but this is not guaranteed to succeed. Our testing suggests that it generally works in practice, but there doesn't appear to be any formal research that confirms this finding.
The connectivity issues just mentioned affect servers and clients differently. Client requests involve only outgoing traffic, which is relatively straightforward. If the client has a public address, requests almost always succeed, since outgoing traffic is typically only blocked on corporate networks with very strict access control. If the client has a private address, it can connect to all servers except servers with private addresses which cannot do NAT punchthrough (more on this shortly). The server side is more complicated, because the server needs to be able to accept incoming connections from unknown sources. With a public address, the server needs to have the game port open to the internet (i.e. not blocked by a firewall), or it cannot accept any connections from clients and is thus unusable. If the server has a private address, it must be able to do NAT punchthrough to allow connections, and clients must also permit NAT punchthrough in order to connect to it.
Unity provides tools to test all these different connectivity situations. When a connection can be established, there are two ways to do it: a direct connection (where a client needs to know the DNS name or IP address of the server) or a connection via the Master Server. The Master Server allows servers to advertise their presence to clients, which need not know anything about particular game servers beforehand.
Minimizing Network Bandwidth
When working with state synchronization across multiple clients, you don't necessarily need to synchronize every single detail in order to make objects appear synchronized. For example, when synchronizing a character avatar, it is enough to send its position and rotation between clients. Even though the character itself is much more complex and might contain a deep Transform hierarchy, data about the entire hierarchy does not need to be shared.
A lot of data in your game is effectively static, and clients neither have to transfer it initially nor synchronize it. Using infrequent or one-time RPC calls should be sufficient to make a lot of your functionality work. Take advantage of the data you know will exist in every installation of your game, and make the clients do as much work as possible by themselves. For example, you know that assets like textures and meshes exist on all installations and don't usually change, so they never have to be synchronized. This is a simple example, but it should get you thinking about exactly which data is truly critical to share from one client to another. This is the only data that you should ever share.
It can be difficult to work out exactly what needs to be shared and what doesn't, especially if you have never made a network game before. Bear in mind that a single RPC call with a level name can make all clients load the entire specified level and add their own networked elements automatically. Structuring your game so that the clients are as self-sufficient as possible will result in reduced bandwidth.
Multiplayer Game Performance
The physical location and performance of the server itself can greatly affect the playability of games running on it. Clients which are on a different continent from the server may experience a great deal of lag. This is a physical limitation of the internet, and the only real solution is to arrange for the server to be as close as possible to the clients who will use it, or at least on the same continent.
Extra Resources
We've collected some additional resources for learning about networking:
- http://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking
- http://developer.valvesoftware.com/wiki/Lag_Compensation
- http://developer.valvesoftware.com/wiki/Working_With_Prediction
- http://www.gamasutra.com/resource_guide/20020916/lambright_01.htm
net-UnityNetworkElements
Unity's native networking supports everything discussed on the previous page: creating servers and connecting clients, sharing data between connected clients, determining which player controls which objects, and punching through network configuration variations are all supported out of the box. This page explains the Unity-specific implementation of these networking practices.
Creating a Server
Before you can begin playing a network game, you have to determine the different computers you will be communicating with. To do this, you have to create a server. This can be a machine that is also running the game, or it can be a dedicated machine that is not participating in the game. To create the server, you simply call Network.InitializeServer() from a script. When you want to connect to an existing server as a client, you call Network.Connect() instead.
In general, you will find it very useful to familiarize yourself with the entire Network class.
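A minimal sketch of both sides might look like this (the connection limit, port and IP address are placeholder values):

```csharp
// C# - a sketch of starting a server or joining one as a client.
// The connection limit, port and IP address are placeholder values.
using UnityEngine;
using System.Collections;

public class ConnectionExample : MonoBehaviour {
	void OnGUI () {
		if (Network.peerType == NetworkPeerType.Disconnected) {
			if (GUILayout.Button ("Start Server"))
				Network.InitializeServer (32, 25002, false); // up to 32 connections, no NAT
			if (GUILayout.Button ("Connect as Client"))
				Network.Connect ("127.0.0.1", 25002);
		}
	}
	// These callbacks fire once the connection has been established
	void OnServerInitialized () { Debug.Log ("Server up and running"); }
	void OnConnectedToServer () { Debug.Log ("Connected to server"); }
}
```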
Communicating using Network Views
The Network View is the Component that sends data across the network. Network Views make your GameObject capable of sending data using RPC calls or State Synchronization. The way you use Network Views will determine how your game's networking behaviors work. Network Views have few options, but they are incredibly important for your networked game.
For more information on using Network Views, please read the Network View Guide page and Component Reference page.
Remote Procedure Calls
Remote Procedure Calls (RPCs) are functions declared in scripts that are attached to a GameObject which contains a Network View. The Network View must point to the script which contains the RPC functions. The RPC functions can then be called from any script within that GameObject.
For more information on using RPCs in Unity, please read the RPC Details page.
State Synchronization
State Synchronization is the continual sharing of data across all game clients. This way, a player's position can be synchronized over all clients, so it seems to be controlled locally even when the data is actually being delivered over a network. To synchronize state within a GameObject, you just need to add a Network View to that object and tell it what to observe. The observed data is then synchronized across all clients in the game.
For more information on using State Synchronization in Unity, please read the State Synchronization page.
Network.Instantiate()
Network.Instantiate() lets you instantiate a prefab on all clients in a natural and easy way. Essentially this is an Instantiate() call, but it performs the instantiation on all clients.
Internally, Network.Instantiate is simply a buffered RPC call which is executed on all clients (including locally). It allocates a NetworkViewID and assigns it to the instantiated prefab, which ensures it synchronizes across all clients correctly.
For more information, please read the Network Instantiate page.
NetworkLevelLoad()
Dealing with sharing data, the state of client players, and loading levels can be a bit overwhelming. The Network Level Load page contains a helpful example for managing this task.
Master Server
The Master Server helps you match games. When you start a server, you connect to the Master Server, and it provides a list of all the active servers.
The Master Server is a meeting place for servers and clients, where servers are advertised and compatible clients can connect to running games. This prevents the need for fiddling with IP addresses for all parties involved. It can even help users host games without them needing to mess with their routers, where under normal circumstances this would be required. It can help clients bypass the server's firewall and get to private IP addresses which are normally not accessible through the public internet. This is done with the help of a facilitator which facilitates connection establishment.
For more information, please read the Master Server page.
Minimizing Bandwidth
It is important to make your game run correctly while using the least amount of bandwidth. There are different methods of sending data, different techniques for deciding what or when to send, and other tricks at your disposal.
For tips and tricks to reduce bandwidth usage, please read the Minimizing Bandwidth page.
Debugging Networked Games
Unity comes with several tools to help you debug your networked game:
- The Network Manager can be used for logging all incoming and outgoing network traffic.
- Using the Inspector and Hierarchy View effectively, you can track object creation and inspect view IDs, etc.
- You can launch Unity twice on the same machine and open different projects in each. On Windows, this can be done by just launching another Unity instance and opening the project from the project wizard. On Mac OS X, multiple Unity instances can be opened from the terminal, and a -projectPath argument can be specified:
/Applications/Unity/Unity.app/Contents/MacOS/Unity -projectPath "/Users/MyUser/MyProjectFolder/"
/Applications/Unity/Unity.app/Contents/MacOS/Unity -projectPath "/Users/MyUser/MyOtherProjectFolder/"
Keep in mind that if you have two instances running at once, only one of them can have focus, so while debugging networking you should let the players run in the background. Otherwise the networking loop is broken and undesirable results can occur. You can enable this in Edit->Project Settings->Player in the editor, or with Application.runInBackground.
Page last updated: 2012-11-13
net-NetworkView
The Network View is the main component involved in sharing data across the network. There are two types of network communication: State Synchronization and Remote Procedure Calls.
Network Views keep watch on particular objects to detect changes. These changes are then shared with all the other clients on the network to ensure the change of state is noted by all of them. This concept is known as state synchronization, and you can read about it in detail on the State Synchronization page.
There are some situations where you would not want the overhead of state synchronization, for example when sending the position of a new object or a respawned player. Since events like these are infrequent, it does not make sense to synchronize the state of the objects involved. Instead, you can use a remote procedure call to tell the clients or server to perform such operations. More information about remote procedure calls can be found on the RPC manual page.
Technical Details
A Network View is identified across the network by its NetworkViewID, which is an identifier that is unique across all the networked machines. It is represented as a 128 bit number, but is automatically compressed down to 16 bits when transferred over the network, if possible.
Each packet that arrives on the client side needs to be applied to the specific Network View specified by the NetworkViewID. Using this identifier, Unity can find the right Network View, unpack the data, and apply the contents of the incoming packet to the Network View's observed object.
More details about using Network Views in the Editor can be found on the Network View Component Reference page.
If you use Network.Instantiate() to create your networked objects, you do not need to worry about assigning Network Views and their IDs correctly; it is all handled automatically behind the scenes.
However, the NetworkViewID of each Network View can be set manually using Network.AllocateViewID. The Scripting Reference documentation shows an example of how an object can be instantiated manually on every client with an RPC function, with the NetworkViewID set manually using AllocateViewID.
Page last updated: 2012-11-29
net-RPCDetails
Remote Procedure Calls (RPCs) let you call functions on a remote machine. Invoking an RPC is similar to calling a normal function, and almost as easy, but there are some important differences to understand.
- An RPC can have any number of parameters, but the network bandwidth used increases with the number and size of the parameters. To get optimal performance, you should keep the parameters to a minimum.
- Unlike a normal function call, an RPC needs an additional parameter to denote the recipients of the request. There are several RPC call modes to cover all common use cases. For example, you can easily invoke an RPC on all connected machines, on the server alone, on all clients but the one sending the call, or on a specific client.
RPC calls are usually used to execute some event on all clients in the game or to pass event information between two parties, but you can be creative and use them however you like. For example, a server for a game which only starts after four clients have connected could send an RPC call to all clients as soon as the fourth one connects, thus starting the game. A client could send an RPC call to all clients to signal that it has picked up an item. A server could send an RPC to a particular client just after it connects, for example to initialize it with a player number, spawn location, team color, etc. A client could, in turn, send an RPC only to the server to specify its starting options, such as the color it prefers or the items it has bought.
Using RPCs
A function must be marked as an RPC before it can be invoked remotely. This is done by prefixing the function with an RPC attribute:
// All RPC calls need the @RPC attribute
@RPC
function PrintText (text : String)
{
Debug.Log(text);
}
Since all network communication goes through the Network View component, one must be attached to the GameObject whose script declares the RPC functions before they can be called.
Parameters
You can use the following variable types as parameters to RPCs:
- int
- float
- string
- NetworkPlayer
- NetworkViewID
- Vector3
- Quaternion
For example, the following code invokes an RPC function with a single string parameter:
networkView.RPC ("PrintText", RPCMode.All, "Hello world");
The first parameter of RPC() is the name of the function to be invoked, while the second determines the targets on which it will be invoked. In this case, we invoke the RPC call on everyone who is connected to the server (but the call is not buffered to wait for clients which connect later; see below for details about buffering).
All parameters after the first two are passed to the RPC function and sent across the network. In this case, "Hello world" is sent as a parameter and passed as the text parameter of the PrintText function.
You can also access an extra internal parameter, a NetworkMessageInfo struct which holds additional details, such as where the RPC call came from. This information is passed automatically, so the PrintText function shown above can be declared as:
@RPC
function PrintText (text : String, info : NetworkMessageInfo)
{
Debug.Log(text + " from " + info.sender);
}
When doing this, the code that invokes the RPC does not need to be changed in any way.
As mentioned above, a Network View must be attached to any GameObject which has a script containing RPC functions. If you are using RPCs exclusively (i.e. without state synchronization), the Network View's State Synchronization can be set to Off.
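In C#, the equivalent of the @RPC attribute is [RPC]. A sketch of the PrintText example in C# (the surrounding class name is illustrative) might look like:

```csharp
// C# - the PrintText RPC from the examples above, using the [RPC] attribute.
// The class name RPCExample is illustrative.
using UnityEngine;
using System.Collections;

public class RPCExample : MonoBehaviour {
	void OnGUI () {
		if (GUILayout.Button ("Send"))
			networkView.RPC ("PrintText", RPCMode.All, "Hello world");
	}

	[RPC]
	void PrintText (string text, NetworkMessageInfo info) {
		// info.sender identifies which machine the call came from
		Debug.Log (text + " from " + info.sender);
	}
}
```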
RPC Buffer
RPC calls can also be buffered. Buffered RPC calls are stored up and executed in the order they were issued for each new client that connects. This can be a useful way to ensure that a latecoming player gets all the necessary information to start. A common scenario is that every player who joins a game should first load a specific level. You could send the details of this level to all connected players, but also buffer it for any who join in the future. By doing this, you ensure that a new player receives the level information just as if he had been present from the start.
You can also remove calls from the RPC buffer when necessary. Continuing the example above, the game might have moved on from the starting level by the time a new player joins, so you could remove the original buffered RPC and send a new one requesting the new level.
Page last updated: 2012-11-29
net-StateSynchronization
You can enable State Synchronization for a given Network View by choosing either Reliable Delta Compressed or Unreliable from the State Synchronization drop-down. You must then choose what kind of data will be synchronized using the Observed property.
Unity can synchronize Transform, Animation, Rigidbody and MonoBehaviour components.
Transforms are serialized by storing position, rotation and scale. Parenting information is not transferred over the network.
Animations serialize the state of each running animation, namely the time, weight, speed and enabled properties.
Rigidbodies serialize position, rotation, velocity and angular velocity.
Scripts (MonoBehaviours) call the OnSerializeNetworkView() function.
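For example, a script observed by a Network View could serialize a value like this (a sketch; the health field is illustrative, and a real script would serialize whatever state it needs to share):

```csharp
// C# - a sketch of OnSerializeNetworkView for a script observed by a Network View.
// The health field is illustrative; serialize whatever state your game needs.
using UnityEngine;
using System.Collections;

public class HealthSync : MonoBehaviour {
	int health = 100;

	void OnSerializeNetworkView (BitStream stream, NetworkMessageInfo info) {
		if (stream.isWriting) {
			// This instance owns the object: write the current value into the stream
			int healthToSend = health;
			stream.Serialize (ref healthToSend);
		} else {
			// A remote update arrived: read the value out of the stream
			int receivedHealth = 0;
			stream.Serialize (ref receivedHealth);
			health = receivedHealth;
		}
	}
}
```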
Reliability and bandwidth
Network Views have two reliability levels: Reliable Delta Compressed and Unreliable.
Both have their advantages and disadvantages, and the specifics of your game will determine which is best to use.
For additional information about minimizing bandwidth, please read the Minimizing Bandwidth page.
Reliable Delta Compressed
In Reliable Delta Compressed mode, Unity automatically compares the data that was last received by the client with the current state. If no data has changed since the last update, no data is sent. The data is compared on a per-property basis; for example, if the Transform's position has changed but its rotation has not, only the position is sent across the network. Transmitting only the changed data saves bandwidth.
Unity also ensures that every packet sent arrives, by resending the packet until receipt is confirmed. This means that if a packet is dropped, any packets sent later will not be applied until the dropped packet has been re-sent and received. Until then, all later packets are kept waiting in a buffer.
Unreliable
In Unreliable mode, Unity sends packets without checking that they have been received. This means that it doesn't know which information has reached the other side, so it is not safe to send only the changed data - the whole state is sent with each update.
Deciding which method to use
The network layer uses UDP, an unreliable, unordered protocol, but it can also deliver ordered packets reliably, just like TCP does. Internally, it uses ACKs and NACKs to control packet transmission and ensure that no packets are dropped. The downside to using reliable, ordered packets is that if a packet is dropped or delayed, everything stops until that packet has arrived safely. This can cause noticeable delays on networks with high latency.
Unreliable sending is useful when you know the data will change every frame anyway. For example, in a racing game you can practically rely on the player's car always moving, so the effects of a missed packet will soon be corrected by the next one.
In general, you should use Unreliable sending where quick, frequent updates are more important than the occasional missed packet. Conversely, when data doesn't change so frequently, you can use Reliable Delta Compressed to save bandwidth.
Prediction
When the server has full authority over the world state, the client changes the game state according to the updates it receives from the server. One problem with this is that the delay in waiting for the server to respond can affect gameplay. For example, when a player presses a key to move forward, he won't actually move until the updated state is received from the server. This delay depends on the latency of the connection, so in the worst case, the slower the connection, the less snappy the control system becomes.
One possible solution to this is Client-side Prediction, which means the client predicts the expected movement by running approximately the same model as the server. The player responds immediately to input, while the server sees its state from the last update. When a state update finally arrives from the server, the client corrects itself as needed. Errors in prediction are corrected as they are detected, and if corrections are applied continuously, the result will be smoother and less noticeable.
Dead reckoning, or interpolation/extrapolation
The basic principle of client-side prediction can also be applied to a player's opponents. Extrapolation is the process of storing the last few known values of an opponent's position, velocity and direction, and using these to predict where it will be in the near future. When the next update finally arrives with the correct position, the client state is updated with the exact information, which may cause the character to appear to jump if the prediction was bad. In FPS games, player movement is often erratic, so this kind of prediction is rarely effective. If the latency gets high enough, opponents will jump severely as prediction errors accumulate.
Interpolation can be used when packets get dropped on the way to the client. This would normally cause an NPC's movement to pause and then jump to the newest position when a new packet finally arrives. By delaying the world state by some set amount of time (say 100 ms) and then interpolating between the last known position and the new one, the movement between these two points will be smooth even when a packet is dropped.
Page last updated: 2012-11-29
net-NetworkInstantiate
The Network.Instantiate function lets you instantiate a prefab on all clients in an intuitive way, just like calling Object.Instantiate on a single client. The instantiating client is the one that controls the object (i.e. the Input class is only accessible from scripts on the instantiating client's instance), but the changes are reflected across the network.
The argument list for Network.Instantiate() is as follows:
static function Instantiate (prefab : Object, position : Vector3, rotation : Quaternion, group : int) : Object
As with Object.Instantiate, the first three parameters describe the prefab to be instantiated, along with its desired position and rotation. The group parameter allows you to define subgroups of objects to control the filtering of messages; if message filtering is not needed, you can simply pass zero (see the Communication Groups section below).
Technical Details
Behind the scenes, network instantiation is built around an RPC call which contains an identifier for the prefab, along with the position and other details. The RPC call is buffered like any other RPC call, so instantiated objects will appear on new clients when they connect. See RPC for more details about buffering.
Communication Groups
Communication groups can be used to select which clients receive particular messages. For example, two connected players might be in separate areas of the game world where they will never encounter each other. In this case, there is no reason to exchange game state between the two player clients, but you may still want to allow chat communication between them. In that case, instantiation of the gameplay objects needs to be restricted, but not that of the objects implementing the chat feature, so the two kinds of object should be placed in separate groups.
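As a sketch, a gameplay object and a chat object might be instantiated into different groups like this (the prefab fields, class name, and group numbers are illustrative values):

```csharp
// C# - a sketch of instantiating objects into separate communication groups.
// The prefab references, class name and group numbers are illustrative.
using UnityEngine;
using System.Collections;

public class Spawner : MonoBehaviour {
	public Transform playerPrefab; // gameplay object
	public Transform chatPrefab;   // chat object

	void OnConnectedToServer () {
		// Gameplay traffic goes into group 0, chat traffic into group 1,
		// so gameplay messages can later be filtered without affecting chat
		Network.Instantiate (playerPrefab, transform.position, transform.rotation, 0);
		Network.Instantiate (chatPrefab, Vector3.zero, Quaternion.identity, 1);
	}
}
```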
Page last updated: 2012-11-29
net-NetworkLevelLoad
Below is a simple example of a way to load a level in a multiplayer game. It makes sure no network messages are processed while the level is being loaded, and that no messages are sent until everything is ready. Lastly, when the level is loaded, it sends a message to all scripts so that they know the level is loaded and can react to that. The SetLevelPrefix function helps keep unwanted network updates out of the newly loaded level, for example updates belonging to the previous level. This example uses groups to separate game data and level load communication. Group 0 is used for game data traffic and group 1 for level loading. Group 0 is blocked while the level is being loaded, but group 1 is kept open; it could also carry chat communication, which would then stay open during level loading.
var supportedNetworkLevels : String[] = [ "mylevel" ];
var disconnectedLevel : String = "loader";
private var lastLevelPrefix = 0;
function Awake ()
{
// Network level loading is done in a separate channel.
DontDestroyOnLoad(this);
networkView.group = 1;
Application.LoadLevel(disconnectedLevel);
}
function OnGUI ()
{
if (Network.peerType != NetworkPeerType.Disconnected)
{
GUILayout.BeginArea(Rect(0, Screen.height - 30, Screen.width, 30));
GUILayout.BeginHorizontal();
for (var level in supportedNetworkLevels)
{
if (GUILayout.Button(level))
{
Network.RemoveRPCsInGroup(0);
Network.RemoveRPCsInGroup(1);
networkView.RPC( "LoadLevel", RPCMode.AllBuffered, level, lastLevelPrefix + 1);
}
}
GUILayout.FlexibleSpace();
GUILayout.EndHorizontal();
GUILayout.EndArea();
}
}
@RPC
function LoadLevel (level : String, levelPrefix : int)
{
lastLevelPrefix = levelPrefix;
// There is no reason to send any more data over the network on the default channel,
// because we are about to load the level, and all those objects will be deleted anyway
Network.SetSendingEnabled(0, false);
// We need to stop receiving too, because first the level must be loaded.
// Once the level is loaded, RPCs and other state updates attached to objects in the level are allowed to fire
Network.isMessageQueueRunning = false;
// All network views loaded from a level will get a prefix into their NetworkViewID.
// This will prevent old updates from clients leaking into the newly created scene.
Network.SetLevelPrefix(levelPrefix);
Application.LoadLevel(level);
yield;
yield;
// Allow receiving data again
Network.isMessageQueueRunning = true;
// Now the level has been loaded and it is okay to send out data to clients
Network.SetSendingEnabled(0, true);
for (var go in FindObjectsOfType(GameObject))
go.SendMessage("OnNetworkLoadedLevel", SendMessageOptions.DontRequireReceiver);
}
function OnDisconnectedFromServer ()
{
Application.LoadLevel(disconnectedLevel);
}
@script RequireComponent(NetworkView)
Page last updated: 2012-11-28
net-MasterServer
The Master Server is a meeting place where player clients find game instances to connect to. It hides port numbers and IP addresses, and performs the technical tasks involved in setting up network connections, such as firewall handling and NAT punchthrough.
Each individual running game instance provides a Game Type to the Master Server. When a player connects to the Master Server and queries for a matching Game Type, the server responds with a list of the running games, along with the number of players in each and whether a password is needed to play. The functions used for this data exchange are MasterServer.RegisterHost() for the server, and MasterServer.RequestHostList() for the player client.
NAT punchthrough is handled by a separate process called the Facilitator, but Unity's Master Server runs both services in parallel.
The Game Type should be a name that uniquely identifies your game (although Unity offers no central registration system to guarantee this); choose a name that is unlikely to be used by anyone else. If there are different versions of your game, you can warn the user that their client does not match the version of the running server. The version information can be passed in the comment field (it is binary data, so the version can be passed in any format you like). The game name is simply the name of the particular game instance, as supplied by whoever set it up.
The comment field can be used in more advanced ways if the Master Server is suitably configured (see below for details). For example, you could reserve the first ten bytes of the comment field for a password and then extract the password in the Master Server when it receives the host update. It can then reject the host update if a password check fails.
Registering a game
Before registering a game, it is important to enable or disable NAT functionality depending on whether it is supported by the host; you can do this with the useNat parameter of Network.InitializeServer.
A server might be started with code like this:
function OnGUI() {
if (GUILayout.Button ("Start Server"))
{
// Use NAT punchthrough if no public IP is present
Network.InitializeServer(32, 25002, !Network.HavePublicAddress());
MasterServer.RegisterHost("MyUniqueGameType", "JohnDoes game", "l33t game for all");
}
}
Here we decide whether NAT punchthrough is needed simply by checking whether the machine has a public IP address. A more sophisticated function, Network.TestConnection, can tell you whether the host machine can do NAT or not. It also performs connectivity testing for public IP addresses, checking whether a firewall is blocking the game port. A machine with a public IP address will always pass the NAT test, but if the connectivity test fails, the host will be unable to connect to NAT clients. In that case the user should be informed that port forwarding must be set up for the game to work. Domestic broadband connections usually have a NAT address and cannot set up port forwarding (since they have no public IP address). In these cases, if the NAT test fails, the user should be informed that running a server is inadvisable because only clients on the same local network will be able to connect.
If a host enables NAT functionality without needing it, it will still be accessible. However, clients which cannot do NAT punchthrough might incorrectly conclude that they cannot connect to it, on the grounds that the server has NAT enabled.
Connecting to a game
A HostData object is sent during host registrations and queries. It contains the following information about a host:
| boolean | useNat | Indicates whether the host uses NAT punchthrough. |
| String | gameType | The game type of the host. |
| String | gameName | The game name of the host. |
| int | connectedPlayers | The number of currently connected players/clients. |
| int | playerLimit | The maximum number of concurrent players/clients allowed. |
| String[] | IP | The internal IP address of the host. On a server with a public address, the external and internal addresses are the same. This field is an array because, when connecting internally, every IP address bound to each of the machine's network interfaces needs to be checked. |
| int | port | The port of the host. |
| boolean | passwordProtected | Indicates whether a password is needed to connect to this host. |
| String | comment | Any comment set during host registration. |
| String | guid | The network GUID of the host. This is required to connect using NAT punchthrough. |
This information can be used by clients to see whether a host can be connected to. When NAT is enabled, the host's GUID is needed when connecting. This is handled automatically when the retrieved HostData is used during connection. The connection routine might look like this:
function Awake() {
MasterServer.RequestHostList("MadBubbleSmashGame");
}
function OnGUI() {
var data : HostData[] = MasterServer.PollHostList();
// Go through all the hosts in the host list
for (var element in data)
{
GUILayout.BeginHorizontal();
var name = element.gameName + " " + element.connectedPlayers + " / " + element.playerLimit;
GUILayout.Label(name);
GUILayout.Space(5);
var hostInfo;
hostInfo = "[";
for (var host in element.ip)
hostInfo = hostInfo + host + ":" + element.port + " ";
hostInfo = hostInfo + "]";
GUILayout.Label(hostInfo);
GUILayout.Space(5);
GUILayout.Label(element.comment);
GUILayout.Space(5);
GUILayout.FlexibleSpace();
if (GUILayout.Button("Connect"))
{
// Connect to the HostData struct; internally the correct connection method is used (GUID when using NAT).
Network.Connect(element);
}
GUILayout.EndHorizontal();
}
}
This example displays all the relevant host information returned by the Master Server. Other useful data, like ping information or the geographic location of hosts, can be added to this.
NAT punchthrough
The availability of NAT punchthrough determines whether a particular computer is suitable to use as a server. While some clients might be able to connect to it, others might have trouble connecting to any NAT server.
By default, NAT punchthrough is provided with the help of the Master Server, but it need not be done this way. The Facilitator is the process actually used for the NAT punchthrough service. If two machines are both connected to the Facilitator, they will appear able to connect to each other, as long as they use the external IP address and port. The Master Server is used to provide this external IP and port information, which is otherwise hard to determine; that is why the Master Server and Facilitator are so tightly integrated. By default the Master Server and Facilitator have the same IP address; to change either one, use MasterServer.ipAddress, MasterServer.port, Network.natFacilitatorIP and Network.natFacilitatorPort.
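If you deploy your own Master Server and Facilitator, a client can be pointed at them before any other networking calls are made. This is a minimal sketch in the manual's UnityScript style; the IP address is a placeholder for your own deployment (the ports shown are the defaults these services listen on):

```
// Point Unity at a self-hosted Master Server and Facilitator.
// 203.0.113.10 is a placeholder address, not a real server.
function Awake () {
	MasterServer.ipAddress = "203.0.113.10";    // your Master Server
	MasterServer.port = 23466;                  // default Master Server port
	Network.natFacilitatorIP = "203.0.113.10";  // usually the same machine
	Network.natFacilitatorPort = 50005;         // default Facilitator port
}
```

Setting these in Awake ensures that subsequent RegisterHost and RequestHostList calls talk to your servers instead of Unity's public ones.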
Advanced
Unity Technologies runs a fully deployed Master Server available for testing purposes, and this is in fact the server that is used by default. However, the source code is freely available, and the server can be deployed on Windows, Linux and Mac OS. Besides simply building the project from source, there may be cases where you want to modify how the Master Server handles information and how it communicates. For example, you might optimize the handling of host data or limit the number of clients returned in the host list. Such changes require modifying the source code; information about how to do this can be found on the Master Server Build page.
Page last updated: 2012-11-28
net-MasterServerBuild
The source code for all the individual networking servers can be downloaded from the Unity website. This includes the Connection Tester, Facilitator, Master Server and Proxy Server.
All source packages include the RakNet 3.732 networking library, which handles the basic networking functions and provides the plugins used by the networking servers.
The packages include three different types of project files, ready for compilation:
- An Xcode 3.0 project for Mac OS X
- A Makefile for Linux and Mac OS X
- A Visual Studio 2008 solution
The Xcode and Visual Studio projects can simply be opened, compiled and built. To build with the Makefile, just run make. It should work with a standard compilation setup on Linux and Mac OS X, provided that gcc is available. On Linux you might also need the ncurses library.
Structure
The Master Server
The Master Server maintains an internal database structure which keeps track of host information.
Hosts send messages with the RUM_UPDATE_OR_ADD_ROW message identifier and their host information embedded. This is processed in the OnReceive() function in the LightweightDatabaseServer.cpp file. This is where all messages first appear, so it is a good place to start if you want to trace how a message is processed. A table is created within the database structure for each game type that is set with MasterServer.RegisterHost. All hosts of one game type are grouped together in one table; if a table does not yet exist for a game type, it is created dynamically in the CreateDefaultTable() function.
The host data is modified by the Master Server. The IP address and port of the registering game, as seen by the Master Server, are injected into the host data. This way the external IP address and port are guaranteed to be correct even when the host has a private (NAT) address. The IP address and port sent by the game server in its host data are its private address and port, and these are stored for later use. If the Master Server detects that a client requesting the host data for a game server has the same IP address as that server, it uses the private address of the server instead of the external one. This handles the case where the client and server are on the same local network, behind the same NAT router: they share the same external address and cannot connect to each other through it, so the private address must be used instead, and in this case it will work.
Clients send messages with the ID_DATABASE_QUERY_REQUEST message identifier and the game type they are looking for. The table, or host list, is fetched from the database and sent to the client; if none is found, an empty host list is returned.
All messages sent to the Master Server must contain version information, which is checked in the CheckVersion() function. At the moment, each version of Unity internally sets a new Master Server version, and this is what is checked here. If the Master Server communication routine changes at some point, older versions can be detected here and, if desired, redirected to another version of the Master Server, or the message handling can be adjusted to accommodate the difference.
The Facilitator
The Facilitator uses the NAT punchthrough plugin from RakNet directly, with no modifications. It is essentially just a peer listening on a port with the NAT punchthrough plugin loaded. When a server and a client that both have NAT addresses are connected to this peer, they can perform NAT punchthrough and connect to each other. The connection is made automatically when Network.InitializeServer uses NAT.
Page last updated: 2012-11-30
net-MinimizingBandwidth
Since network communication is slow compared to other parts of the game, it is important to keep it to a minimum. It is therefore very important to consider how much data you are exchanging and how frequently those exchanges take place.
How data is synchronized
The amount of network bandwidth used depends heavily on whether you use the Unreliable or the Reliable Delta Compressed mode to synchronize data (the mode is set from the Network View component).
In Unreliable mode, the whole of the object being synchronized is transmitted on each iteration of the network update loop. The frequency of this update is determined by the value of Network.sendRate, which defaults to 15 updates per second. Unreliable mode ensures frequent updates, but any dropped or delayed packets are simply ignored. This is the best synchronization mode for objects that update frequently and where a missed update only matters for a short time. However, you should bear in mind the amount of data sent on each update. For example, synchronizing a Transform involves transmitting nine float values (three Vector3s with three floats each), or 36 bytes per update. If a server is running with eight clients at the default update frequency, it will receive 4,320 bytes per second (8×36×15), or about 34.6 kbit/s, and send out 30,240 bytes per second (8×7×36×15), or about 242 kbit/s. You can reduce bandwidth consumption considerably by lowering the update frequency, although the default of 15 is about right for a fast-paced action game.
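The arithmetic above is easy to reproduce. A small Python check (the function is ours, not part of Unity; it just encodes the relay pattern described in the paragraph):

```python
def transform_sync_bandwidth(clients, send_rate=15, bytes_per_update=36):
    """Server-side Transform traffic in Unreliable mode.

    Each of the `clients` sends one update per iteration to the server,
    and the server relays each update to the other (clients - 1) peers.
    Returns (inbound_bytes_per_s, outbound_bytes_per_s).
    """
    inbound = clients * bytes_per_update * send_rate
    outbound = clients * (clients - 1) * bytes_per_update * send_rate
    return inbound, outbound

inbound, outbound = transform_sync_bandwidth(8)
print(inbound)   # 4320 bytes/s  (~34.6 kbit/s)
print(outbound)  # 30240 bytes/s (~242 kbit/s)
```

Halving the send rate halves both figures, which is why lowering Network.sendRate is the quickest bandwidth lever.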
In Reliable Delta Compressed mode, data is guaranteed to be received reliably and in the correct order. If a packet is dropped, it is re-sent; if packets arrive out of order, they are buffered until every packet in the sequence has arrived. While this ensures that transmitted data is received correctly, the waiting and re-sending tend to increase bandwidth usage. However, the data is also delta compressed, meaning only the differences between the last state and the current state are transmitted. If the states are exactly the same, nothing is sent. The benefit of delta compression therefore depends on how much, and which, properties change.
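In either mode, only the data written in the serialization callback travels over the network. As a hedged illustration in the manual's UnityScript style (the field names are ours, not from the API), a script observed by a Network View can hand-pick its state in OnSerializeNetworkView:

```
// Sketch: synchronize only a position and a health value.
// Attach to an object and set it as the Observed property of its Network View.
var health : int = 100;

function OnSerializeNetworkView (stream : BitStream, info : NetworkMessageInfo) {
	var pos : Vector3 = transform.position;
	var hp : int = health;
	stream.Serialize(pos);   // writes when sending, reads when receiving
	stream.Serialize(hp);
	if (stream.isReading) {
		transform.position = pos;
		health = hp;
	}
}
```

Keeping this callback small is the most direct way to control how many bytes each update costs.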
Which data is synchronized
There is plenty of scope for creativity in designing a game so that the state appears to be the same on all clients without actually being identical in every detail. A good example is synchronizing animation. If an Animation component is observed by a Network View, its properties are synchronized exactly, so all clients play the animation frame-synchronized. While this may sometimes be desirable, it is usually enough for the character simply to be seen walking, running or jumping. The animation synchronization can therefore be cut down to sending just an integer value that specifies which animation sequence to play. This saves a great deal of network bandwidth compared to synchronizing the whole animation.
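The idea above might be sketched like this in the manual's UnityScript style (the RPC name, the constants and the clip names are illustrative, not part of the API):

```
// Sketch: send one small integer RPC instead of synchronizing the Animation component.
static var ANIM_IDLE : int = 0;
static var ANIM_JUMP : int = 1;

function StartJump () {
	animation.Play("jump");                                      // play locally
	networkView.RPC("SetAnimState", RPCMode.Others, ANIM_JUMP);  // tell everyone else
}

@RPC
function SetAnimState (state : int) {
	if (state == ANIM_JUMP)
		animation.Play("jump");
	else
		animation.Play("idle");
}
```

One four-byte integer per state change replaces a continuous stream of animation property updates.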
When to synchronize data
It is often unnecessary to keep the game perfectly synchronized on all clients; for example, players may temporarily be in different areas of the game world where they cannot encounter each other. In this case, both bandwidth and server load can be reduced by synchronizing each client only with the clients it can actually encounter. This concept is known as Relevant Sets (i.e. at any given time, only a subset of the whole game is actually relevant to a particular client). Synchronizing clients according to their relevant sets can be handled with RPCs, which give you tighter control over where synchronization updates are directed.
Level loading
When loading levels, it is seldom necessary to worry about the bandwidth each client uses; you can simply let every client wait until all of them have initialized the level to be played. Level loading can often involve transferring quite large data items (such as image or audio data).
Page last updated: 2012-11-30
net-SocialAPI
Social API is Unity's point of access to social features, such as:
- User profiles
- Friends lists
- Achievements
- Statistics / Leaderboards
It provides a unified interface to different social back-ends, such as XBox Live or GameCenter, and is meant to be used primarily by programmers on the game project.
The Social API is mainly an asynchronous API; the typical way to use it is to make a function call and register a callback for when that function completes. The asynchronous function may have side effects, such as populating certain state variables in the API, and the callback may contain data from the server to be processed.
The Social class resides in the UnityEngine namespace and so is always available, but the other Social API classes are kept in their own namespace, UnityEngine.SocialPlatforms. Furthermore, implementations of the Social API live in sub-namespaces, such as SocialPlatforms.GameCenter.
Here is an example (JavaScript) of how one might use the Social API:
import UnityEngine.SocialPlatforms;
function Start () {
// Authenticate and register a ProcessAuthentication callback
// This call needs to be made before we can proceed to other calls in the Social API
Social.localUser.Authenticate (ProcessAuthentication);
}
// This function gets called when Authenticate completes
// Note that if the operation is successful, Social.localUser will contain data from the server.
function ProcessAuthentication (success: boolean) {
if (success) {
Debug.Log ("Authenticated, checking achievements");
// Request loaded achievements, and register a callback for processing them
Social.LoadAchievements (ProcessLoadedAchievements);
}
else
Debug.Log ("Failed to authenticate");
}
// This function gets called when the LoadAchievements call completes
function ProcessLoadedAchievements (achievements: IAchievement[]) {
if (achievements.Length == 0)
Debug.Log ("Error: no achievements found");
else
Debug.Log ("Got " + achievements.Length + " achievements");
// You can also call into the functions like this
Social.ReportProgress ("Achievement01", 100.0, function(result) {
if (result)
Debug.Log ("Successfully reported achievement progress");
else
Debug.Log ("Failed to report achievement");
});
}
Here is the same example using C#:
using UnityEngine;
using UnityEngine.SocialPlatforms;
public class SocialExample : MonoBehaviour {
void Start () {
// Authenticate and register a ProcessAuthentication callback
// This call needs to be made before we can proceed to other calls in the Social API
Social.localUser.Authenticate (ProcessAuthentication);
}
// This function gets called when Authenticate completes
// Note that if the operation is successful, Social.localUser will contain data from the server.
void ProcessAuthentication (bool success) {
if (success) {
Debug.Log ("Authenticated, checking achievements");
// Request loaded achievements, and register a callback for processing them
Social.LoadAchievements (ProcessLoadedAchievements);
}
else
Debug.Log ("Failed to authenticate");
}
// This function gets called when the LoadAchievements call completes
void ProcessLoadedAchievements (IAchievement[] achievements) {
if (achievements.Length == 0)
Debug.Log ("Error: no achievements found");
else
Debug.Log ("Got " + achievements.Length + " achievements");
// You can also call into the functions like this
Social.ReportProgress ("Achievement01", 100.0, result => {
if (result)
Debug.Log ("Successfully reported achievement progress");
else
Debug.Log ("Failed to report achievement");
});
}
}
For more info on the Social API, check out the Social API Scripting Reference.
Page last updated: 2012-11-30
Built-in Shader Guide
This is the best place to find information about using Unity's built-in shaders. Unity includes more than 40 built-in shaders, and of course you can write your own. This guide explains each family of the built-in shaders, then goes into detail on each specific shader. With this guide, you'll be able to make the most of Unity's shaders to achieve the effect you're aiming for.
Using Shaders
Shaders in Unity are used through Materials, which essentially combine shader code with parameters like textures. An in-depth explanation of the shader/material relationship can be read here.
Material properties will appear in the Inspector when either the Material itself or a GameObject that uses the Material is selected. The Material Inspector looks like this:

Each Material will look a little different in the Inspector, depending on the shader it uses. The shader itself determines what kinds of properties are available to adjust in the Inspector. The Material Inspector is described in detail on the Material reference page. Remember that a shader is implemented through a Material: while the shader defines the properties shown in the Inspector, each Material actually contains the adjusted data from sliders, colors and textures. The most important thing to remember is that a single shader can be used by multiple Materials, but a single Material cannot use multiple shaders.
Built-in Unity Shaders
- Performance of Unity shaders
- Normal Shader Family
- Transparent Shader Family
- Transparent Cutout Shader Family
- Self-Illuminated Shader Family
- Reflective Shader Family
shader-Performance
There are a number of factors that can affect the overall performance of your game. This page talks specifically about the performance considerations for Built-in Shaders. A shader's performance mostly depends on two things: the shader itself, and the Rendering Path used by the project or a specific camera. For performance tips when writing your own shaders, see the ShaderLab Shader Performance page.
Rendering Paths and shader performance
Of the rendering paths Unity supports, Deferred Lighting and Vertex Lit have the most predictable performance. In Deferred Lighting, each object is generally drawn twice, no matter which lights affect it. Similarly, in Vertex Lit each object is generally drawn once. So the performance differences between shaders mostly depend on how many textures they use and what calculations they do.
Shader performance in the Forward rendering path
In the Forward rendering path, a shader's performance depends on both the shader itself and the lights in the scene. The following sections explain the details. From a performance perspective there are two basic categories of shaders: Vertex-Lit and Pixel-Lit.
Vertex-Lit shaders in the Forward rendering path are always cheaper than Pixel-Lit shaders. These shaders calculate lighting based on the mesh vertices. Because of this, no matter how many lights shine on the object, it only has to be drawn once.
Pixel-Lit shaders calculate final lighting at each pixel that is drawn. Because of this, the object has to be drawn once to get the ambient and main directional light, and once more for each additional light shining on it. Thus the formula is N rendering passes, where N is the final number of pixel lights shining on the object. This increases the load on the CPU, which must process and send off commands to the graphics card, and on the graphics card, which must process the vertices and draw the pixels. The size of the Pixel-Lit object on screen also affects the speed at which it is drawn: the larger the object, the slower it is drawn.
So Pixel-Lit shaders come at a performance cost, but that cost allows for some spectacular effects: shadows, normal mapping, good-looking specular highlights and light cookies, just to name a few.
Remember also that lights can be forced into pixel ("important") or vertex/SH ("not important") mode. Any vertex lights shining on a Pixel-Lit shader are calculated based on the object's vertices or the object as a whole, and do not add to the rendering cost or the visual effects associated with pixel lights.
General shader performance
The Built-in Shaders come roughly in this order of increasing complexity:
- Unlit. This is just a texture, not affected by any lighting.
- VertexLit.
- Diffuse.
- Normal mapped. A bit more expensive than Diffuse: it adds one more texture (the normal map) and a few shader instructions.
- Specular. This adds a specular highlight calculation.
- Normal Mapped Specular. Again, a bit more expensive than Specular.
- Parallax Normal mapped. This adds the parallax normal-mapping calculation.
- Parallax Normal Mapped Specular. This adds both the parallax normal-mapping and the specular highlight calculations.
Additionally, Unity has several simplified shaders targeted at mobile platforms, under the "Mobile" category. These shaders work well on other platforms too, so if you can live with their simplifications (e.g. approximate specular, no per-material color support, etc.), do try them!
Page last updated: 2012-11-30
shader-NormalFamily
These shaders are the basic shaders in Unity. They are not specialized in any way and should be suitable for most opaque objects. They are not suitable if you want your object to be transparent or to emit light.
Vertex Lit
Assets needed:
- One Base texture, no alpha channel required
Diffuse
Assets needed:
- One Base texture, no alpha channel required
Specular
Assets needed:
- One Base texture with alpha channel for Specular Map
Normal mapped
Assets needed:
- One Base texture, no alpha channel required
- One Normal map
Normal mapped Specular
Assets needed:
- One Base texture with alpha channel for Specular Map
- One Normal map
Parallax
Assets needed:
- One Base texture, no alpha channel required
- One Normal map
- One Height texture with Parallax Depth in the alpha channel
Parallax Specular
Assets needed:
- One Base texture with alpha channel for Specular Map
- One Normal map
- One Height texture with Parallax Depth in the alpha channel
Decal
Assets needed:
- One Base texture, no alpha channel required
- One Decal texture with alpha channel for Decal transparency
Diffuse Detail
Assets needed:
- One Base texture, no alpha channel required
- One Detail grayscale texture, with 50% gray as the neutral color
shader-NormalVertexLit

Vertex-Lit Properties
This shader is Vertex-Lit, which is one of the simplest shaders. All lights shining on it are rendered in a single pass and calculated at vertices only.
Because it is vertex-lit, it won't display any pixel-based rendering effects such as light cookies, normal mapping or shadows. This shader is also much more sensitive to the tesselation of the models. If you place a point light very close to a cube using this shader, the light will only be calculated at the corners. Pixel-lit shaders are much more effective at creating a nice round highlight, independent of tesselation. If that's the effect you want, consider using a pixel-lit shader or increasing the tesselation of the object instead.
Performance
Generally, this shader is very cheap to render. For more details, please view the Shader Performance page.
Page last updated: 2012-11-10
shader-NormalDiffuse

Diffuse Properties
Diffuse computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Performance
Generally, this shader is cheap to render. For more details, please view the Shader Performance page.
Page last updated: 2012-11-20
shader-NormalSpecular

Specular Properties
Specular computes the same simple (Lambertian) lighting as Diffuse, plus a viewer-dependent specular highlight. This is called the Blinn-Phong lighting model. The specular highlight depends on the surface angle, the light angle and the viewing angle. The highlight is actually just a realtime-suitable simulation of a blurred reflection of the light source. The level of blur of the highlight is controlled with the Shininess slider in the Inspector.
Additionally, the alpha channel of the main texture acts as a Specular Map (sometimes called a "Gloss Map"), defining which areas of the object are more reflective than others. Black areas of the alpha give zero specular reflection, while white areas give full specular reflection. This is very useful when you want different areas of the object to reflect different levels of specularity. For example, rusty metal would use low specularity, while polished metal would use high specularity. Lipstick has higher specularity than skin, and skin has higher specularity than cotton clothes. A well-made Specular Map can make a huge difference in impressing the player.
Performance
Generally, this shader is cheap to render. For more details, please view the Shader Performance page.
Page last updated: 2012-11-30
shader-NormalBumpedDiffuse

Normal Mapped Properties
Like a Diffuse shader, this computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Normal mapping simulates small surface details using a texture, instead of spending more polygons to actually carve out the detail. It does not actually change the shape of the object, but uses a special texture called a Normal Map to achieve this effect. In the normal map, each pixel's color value represents the angle of the surface normal. Lighting is then computed using this value instead of the one from the geometry. The normal map thus effectively overrides the mesh's geometry when calculating the object's lighting.
Creating Normal maps
You can import a regular grayscale image and convert it to a Normal Map from within Unity. To learn how to do this, please read the Normal map FAQ page.
Technical Details
The Normal Map is a tangent-space type of normal map. Tangent space is the space that "follows the surface" of the model geometry; in this space, Z always points away from the surface. Tangent-space normal maps are a bit more expensive than the "object space" type of normal map, but have some advantages:
- It's possible to use them on deforming models - the bumps will remain on the deforming surface and will just work.
- It's possible to reuse parts of the normal map on different areas of a model, or to use them on different models.
Diffuse Properties
Diffuse computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Performance
Generally, this shader is cheap to render. For more details, please view the Shader Performance page.
Page last updated: 2012-11-30
shader-NormalBumpedSpecular

Normal Mapped Properties
Like a Diffuse shader, this computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Normal mapping simulates small surface details using a texture, instead of spending more polygons to actually carve out the detail. It does not actually change the shape of the object, but uses a special texture called a Normal Map to achieve this effect. In the normal map, each pixel's color value represents the angle of the surface normal. Lighting is then computed using this value instead of the one from the geometry. The normal map thus effectively overrides the mesh's geometry when calculating the object's lighting.
Creating Normal maps
You can import a regular grayscale image and convert it to a Normal Map from within Unity. To learn how to do this, please read the Normal map FAQ page.
Technical Details
The Normal Map is a tangent-space type of normal map. Tangent space is the space that "follows the surface" of the model geometry; in this space, Z always points away from the surface. Tangent-space normal maps are a bit more expensive than the "object space" type of normal map, but have some advantages:
- It's possible to use them on deforming models - the bumps will remain on the deforming surface and will just work.
- It's possible to reuse parts of the normal map on different areas of a model, or to use them on different models.
Specular Properties
Specular computes the same simple (Lambertian) lighting as Diffuse, plus a viewer-dependent specular highlight. This is called the Blinn-Phong lighting model. The specular highlight depends on the surface angle, the light angle and the viewing angle. The highlight is actually just a realtime-suitable simulation of a blurred reflection of the light source. The level of blur of the highlight is controlled with the Shininess slider in the Inspector.
Additionally, the alpha channel of the main texture acts as a Specular Map (sometimes called a "Gloss Map"), defining which areas of the object are more reflective than others. Black areas of the alpha give zero specular reflection, while white areas give full specular reflection. This is very useful when you want different areas of the object to reflect different levels of specularity. For example, rusty metal would use low specularity, while polished metal would use high specularity. Lipstick has higher specularity than skin, and skin has higher specularity than cotton clothes. A well-made Specular Map can make a huge difference in impressing the player.
Performance
Generally, this shader is cheap to render. For more details, please view the Shader Performance page.
Page last updated: 2012-11-30
shader-NormalParallaxDiffuse

Parallax Normal Mapped Properties
Parallax Normal mapped is the same as regular Normal mapped, but with a better simulation of "depth". The extra depth effect is achieved through the use of a Height Map, which is contained in the alpha channel of the Normal map. In the alpha, black is zero depth and white is full depth. This is most often used on bricks and stones to better display the cracks between them.
The Parallax mapping technique is pretty simple, so it can produce artifacts and unusual effects. Specifically, very steep height transitions in the Height Map should be avoided. Adjusting the Height value in the Inspector can also cause the object to become distorted in an odd, unrealistic way. For this reason, it is recommended that you use gradual Height Map transitions and keep the Height slider toward the shallow end.
Diffuse Properties
Diffuse computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Performance
Generally, this shader is on the more expensive rendering side. For more details, please view the Shader Performance page.
Page last updated: 2012-11-30
shader-NormalParallaxSpecular

Parallax Normal Mapped Properties
Parallax Normal mapped is the same as regular Normal mapped, but with a better simulation of "depth". The extra depth effect is achieved through the use of a Height Map, which is contained in the alpha channel of the Normal map. In the alpha, black is zero depth and white is full depth. This is most often used on bricks and stones to better display the cracks between them.
The Parallax mapping technique is pretty simple, so it can produce artifacts and unusual effects. Specifically, very steep height transitions in the Height Map should be avoided. Adjusting the Height value in the Inspector can also cause the object to become distorted in an odd, unrealistic way. For this reason, it is recommended that you use gradual Height Map transitions and keep the Height slider toward the shallow end.
Specular Properties
Specular computes the same simple (Lambertian) lighting as Diffuse, plus a viewer-dependent specular highlight. This is called the Blinn-Phong lighting model. The specular highlight depends on the surface angle, the light angle and the viewing angle. The highlight is actually just a realtime-suitable simulation of a blurred reflection of the light source. The level of blur of the highlight is controlled with the Shininess slider in the Inspector.
Additionally, the alpha channel of the main texture acts as a Specular Map (sometimes called a "Gloss Map"), defining which areas of the object are more reflective than others. Black areas of the alpha give zero specular reflection, while white areas give full specular reflection. This is very useful when you want different areas of the object to reflect different levels of specularity. For example, rusty metal would use low specularity, while polished metal would use high specularity. Lipstick has higher specularity than skin, and skin has higher specularity than cotton clothes. A well-made Specular Map can make a huge difference in impressing the player.
Performance
Generally, this shader is on the more expensive rendering side. For more details, please view the Shader Performance page.
Page last updated: 2012-11-30
shader-NormalDecal

Decal Properties
This shader is a variation of the VertexLit shader. All lights that shine on it are rendered as vertex lights. In addition to the main texture, this shader uses a second texture for additional detail. The second "Decal" texture uses an alpha channel to determine the visible areas of the main texture. The decal texture should supplement the main texture. For example, if you have a brick wall, you can tile a brick texture as the main texture, and use the decal texture with its alpha channel to draw graffiti at different places on the wall.
Performance
This shader is approximately equivalent to the VertexLit shader. It is marginally more expensive due to the second decal texture, but will not have a noticeable impact.
Page last updated: 2012-11-30
shader-NormalDiffuseDetail

Diffuse Detail Properties
This shader is a version of the regular Diffuse shader with additional data. It allows you to define a second "Detail" texture that gradually appears as the camera gets closer to it. It can be used on terrain, for example: you can use a low-resolution base texture and stretch it over the entire terrain. When the camera gets close, the low-resolution texture becomes blurry, which we want to avoid. To avoid this effect, create a generic Detail texture that is tiled over the terrain. This way, when the camera gets close, the additional detail appears and the blurry effect is avoided.
The Detail texture is put "on top" of the base texture. Darker colors in the detail texture darken the main texture and lighter colors brighten it. Detail textures are usually gray-ish. For more information on effectively creating Detail textures, please view this page.
Performance
This shader is pixel-lit and approximately equivalent to the Diffuse shader. It is marginally more expensive due to the additional detail texture.
Page last updated: 2012-11-30
shader-TransparentFamily
The Transparent shaders are used for fully or semi-transparent objects. Using the alpha channel of the Base texture, you can determine which areas of the object are more or less transparent than others. This can create great effects for glass, HUD interfaces or sci-fi effects.
Transparent Vertex-Lit
Assets needed:
- One Base texture with alpha channel for Transparency Map
Transparent Diffuse
Assets needed:
- One Base texture with alpha channel for Transparency Map
Transparent Specular
Assets needed:
- One Base texture with alpha channel for combined Transparency Map/Specular Map
Note: One limitation of this shader is that the Base texture's alpha channel doubles as a Specular Map for the Specular shaders in this family.
Transparent Normal mapped
Assets needed:
- One Base texture with alpha channel for Transparency Map
- One Normal map, no alpha channel required
Transparent Normal mapped Specular
Assets needed:
- One Base texture with alpha channel for combined Transparency Map/Specular Map
- One Normal map, no alpha channel required
Note: One limitation of this shader is that the Base texture's alpha channel doubles as a Specular Map for the Specular shaders in this family.
Transparent Parallax
Assets needed:
- One Base texture with alpha channel for Transparency Map
- One Normal map with alpha channel for Parallax Depth
Transparent Parallax Specular
Assets needed:
- One Base texture with alpha channel for combined Transparency Map/Specular Map
- One Normal map with alpha channel for Parallax Depth
Note: One limitation of this shader is that the Base texture's alpha channel doubles as a Specular Map for the Specular shaders in this family.
Page last updated: 2012-11-11
shader-TransVertexLit

Transparent Properties
This shader can make mesh geometry partially or fully transparent by reading the alpha channel of the main texture. In the alpha, 0 (black) is completely transparent and 255 (white) is completely opaque. If your main texture does not have an alpha channel, the object will appear completely opaque.
Using transparent objects in your game can be tricky, as there are traditional graphical programming problems that can cause sorting issues in your game. For example, if you see odd results when looking through two windows at once, you're experiencing the classical problem of using transparency. The general rule is to be aware that in some cases one transparent object may be drawn in front of another in an unusual way, especially if the objects are intersecting, enclose each other, or are of very different sizes. For this reason, use transparent objects only when you need them, and try not to let them become excessive. You should also make your designers aware that such sorting problems can occur, and have them prepare to change some of the design to work around these issues.
Vertex-Lit Properties
This shader is Vertex-Lit, which is one of the simplest shaders. All lights shining on it are rendered in a single pass and calculated at vertices only.
Because it is vertex-lit, it won't display any pixel-based rendering effects such as light cookies, normal mapping or shadows. This shader is also much more sensitive to the tesselation of the models. If you place a point light very close to a cube using this shader, the light will only be calculated at the corners. Pixel-lit shaders are much more effective at creating a nice round highlight, independent of tesselation. If that's the effect you want, consider using a pixel-lit shader or increasing the tesselation of the object instead.
Performance
Generally, this shader is cheap to render. For more details, please view the Shader Performance page.
Page last updated: 2012-11-13
shader-TransDiffuse

Transparent Properties
This shader can make mesh geometry partially or fully transparent by reading the alpha channel of the main texture. In the alpha, 0 (black) is completely transparent and 255 (white) is completely opaque. If your main texture does not have an alpha channel, the object will appear completely opaque.
Using transparent objects in your game can be tricky, as there are traditional graphical programming problems that can cause sorting issues in your game. For example, if you see odd results when looking through two windows at once, you're experiencing the classical problem of using transparency. The general rule is to be aware that in some cases one transparent object may be drawn in front of another in an unusual way, especially if the objects are intersecting, enclose each other, or are of very different sizes. For this reason, use transparent objects only when you need them, and try not to let them become excessive. You should also make your designers aware that such sorting problems can occur, and have them prepare to change some of the design to work around these issues.
Diffuse Properties
Diffuse computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Performance
Generally, this shader is cheap to render. For more details, please view the Shader Performance page.
Page last updated: 2012-11-13
shader-TransSpecular

One consideration for this shader is that the Base texture's alpha channel defines both the Transparent areas and the Specular Map.
Transparent Properties
This shader can make mesh geometry partially or fully transparent by reading the alpha channel of the main texture. In the alpha, 0 (black) is completely transparent and 255 (white) is completely opaque. If your main texture does not have an alpha channel, the object will appear completely opaque.
Using transparent objects in your game can be tricky, as there are traditional graphical programming problems that can cause sorting issues in your game. For example, if you see odd results when looking through two windows at once, you're experiencing the classical problem of using transparency. The general rule is to be aware that in some cases one transparent object may be drawn in front of another in an unusual way, especially if the objects are intersecting, enclose each other, or are of very different sizes. For this reason, use transparent objects only when you need them, and try not to let them become excessive. You should also make your designers aware that such sorting problems can occur, and have them prepare to change some of the design to work around these issues.
Specular Properties
Specular computes the same simple (Lambertian) lighting as Diffuse, plus a viewer-dependent specular highlight. This is called the Blinn-Phong lighting model. The specular highlight depends on the surface angle, the light angle and the viewing angle. The highlight is actually just a realtime-suitable simulation of a blurred reflection of the light source. The level of blur of the highlight is controlled with the Shininess slider in the Inspector.
Additionally, the alpha channel of the main texture acts as a Specular Map (sometimes called a "Gloss Map"), defining which areas of the object are more reflective than others. Black areas of the alpha give zero specular reflection, while white areas give full specular reflection. This is very useful when you want different areas of the object to reflect different levels of specularity. For example, rusty metal would use low specularity, while polished metal would use high specularity. Lipstick has higher specularity than skin, and skin has higher specularity than cotton clothes. A well-made Specular Map can make a huge difference in impressing the player.
Performance
Generally, this shader is moderately expensive to render. For more details, please view the Shader Performance page.
Page last updated: 2012-11-30
shader-TransBumped Diffuse

Transparent Properties
This shader can make mesh geometry partially or fully transparent by reading the alpha channel of the main texture. In the alpha, 0 (black) is completely transparent and 255 (white) is completely opaque. If your main texture does not have an alpha channel, the object will appear completely opaque.
Using transparent objects in your game can be tricky, as there are traditional graphical programming problems that can cause sorting issues in your game. For example, if you see odd results when looking through two windows at once, you're experiencing the classical problem of using transparency. The general rule is to be aware that in some cases one transparent object may be drawn in front of another in an unusual way, especially if the objects are intersecting, enclose each other, or are of very different sizes. For this reason, use transparent objects only when you need them, and try not to let them become excessive. You should also make your designers aware that such sorting problems can occur, and have them prepare to change some of the design to work around these issues.
Normal Mapped Properties
Like a Diffuse shader, this computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Normal mapping simulates small surface details using a texture, instead of spending more polygons to actually carve out the detail. It does not actually change the shape of the object, but uses a special texture called a Normal Map to achieve this effect. In the normal map, each pixel's color value represents the angle of the surface normal. Lighting is then computed using this value instead of the one from the geometry. The normal map thus effectively overrides the mesh's geometry when calculating the object's lighting.
Creating Normal maps
You can import a regular grayscale image and convert it to a Normal Map from within Unity. To learn how to do this, please read the Normal map FAQ page.
Technical Details
The Normal Map is a tangent-space type of normal map. Tangent space is the space that "follows the surface" of the model geometry; in this space, Z always points away from the surface. Tangent-space normal maps are a bit more expensive than the "object space" type of normal map, but have some advantages:
- It's possible to use them on deforming models - the bumps will remain on the deforming surface and will just work.
- It's possible to reuse parts of the normal map on different areas of a model, or to use them on different models.
Diffuse Properties
Diffuse computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Performance
Generally, this shader is cheap to render. For more details, please view the Shader Performance page.
Page last updated: 2012-11-13
shader-TransBumped Specular

One consideration for this shader is that the Base texture's alpha channel defines both the Transparent areas as well as the Specular Map.
Transparent Properties
This shader can make mesh geometry partially or fully transparent by reading the alpha channel of the main texture. In the alpha, 0 (black) is completely transparent and 255 (white) is completely opaque. If your main texture does not have an alpha channel, the object will appear completely opaque.
Using transparent objects in your game can be tricky, as there are traditional graphical programming problems that can cause sorting issues in your game. For example, if you see odd results when looking through two windows at once, you're experiencing the classical problem of using transparency. The general rule is to be aware that in some cases one transparent object may be drawn in front of another in an unusual way, especially if the objects are intersecting, enclose each other, or are of very different sizes. For this reason, use transparent objects only when you need them, and try not to let them become excessive. You should also make your designers aware that such sorting problems can occur, and have them prepare to change some of the design to work around these issues.
Normal Mapped Properties
Like a Diffuse shader, this computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Normal mapping simulates small surface details using a texture, instead of spending more polygons to actually carve out the detail. It does not actually change the shape of the object, but uses a special texture called a Normal Map to achieve this effect. In the normal map, each pixel's color value represents the angle of the surface normal. Lighting is then computed using this value instead of the one from the geometry. The normal map thus effectively overrides the mesh's geometry when calculating the object's lighting.
Creating Normal maps
You can import a regular grayscale image and convert it to a Normal Map from within Unity. To learn how to do this, please read the Normal map FAQ page.
Technical Details
The Normal Map is a tangent-space type of normal map. Tangent space is the space that "follows the surface" of the model geometry; in this space, Z always points away from the surface. Tangent-space normal maps are a bit more expensive than the "object space" type of normal map, but have some advantages:
- It's possible to use them on deforming models - the bumps will remain on the deforming surface and will just work.
- It's possible to reuse parts of the normal map on different areas of a model, or to use them on different models.
Specular Properties
Specular computes the same simple (Lambertian) lighting as Diffuse, plus a viewer-dependent specular highlight. This is called the Blinn-Phong lighting model. The specular highlight depends on the surface angle, the light angle and the viewing angle. The highlight is actually just a realtime-suitable simulation of a blurred reflection of the light source. The level of blur of the highlight is controlled with the Shininess slider in the Inspector.
Additionally, the alpha channel of the main texture acts as a Specular Map (sometimes called a "Gloss Map"), defining which areas of the object are more reflective than others. Black areas of the alpha give zero specular reflection, while white areas give full specular reflection. This is very useful when you want different areas of the object to reflect different levels of specularity. For example, rusty metal would use low specularity, while polished metal would use high specularity. Lipstick has higher specularity than skin, and skin has higher specularity than cotton clothes. A well-made Specular Map can make a huge difference in impressing the player.
Performance
Generally, this shader is moderately expensive to render. For more details, please view the Shader Performance page.
Page last updated: 2007-05-08
shader-TransParallax Diffuse

Transparent Properties
This shader can make mesh geometry partially or fully transparent by reading the alpha channel of the main texture. In the alpha, 0 (black) is completely transparent and 255 (white) is completely opaque. If your main texture does not have an alpha channel, the object will appear completely opaque.
Using transparent objects in your game can be tricky, as there are traditional graphical programming problems that can cause sorting issues in your game. For example, if you see odd results when looking through two windows at once, you're experiencing the classical problem of using transparency. The general rule is to be aware that in some cases one transparent object may be drawn in front of another in an unusual way, especially if the objects are intersecting, enclose each other, or are of very different sizes. For this reason, use transparent objects only when you need them, and try not to let them become excessive. You should also make your designers aware that such sorting problems can occur, and have them prepare to change some of the design to work around these issues.
Parallax Normal Mapped Properties
Parallax Normal mapped is the same as regular Normal mapped, but with a better simulation of "depth". The extra depth effect is achieved through the use of a Height Map, which is contained in the alpha channel of the Normal map. In the alpha, black is zero depth and white is full depth. This is most often used on bricks and stones to better display the cracks between them.
The Parallax mapping technique is pretty simple, so it can produce artifacts and unusual effects. Specifically, very steep height transitions in the Height Map should be avoided. Adjusting the Height value in the Inspector can also cause the object to become distorted in an odd, unrealistic way. For this reason, it is recommended that you use gradual Height Map transitions and keep the Height slider toward the shallow end.
Diffuse Properties
Diffuse computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Performance
Generally, this shader is on the more expensive rendering side. For more details, please view the Shader Performance page.
Page last updated: 2007-05-08
shader-TransParallax Specular

One consideration for this shader is that the Base texture's alpha channel defines both the Transparent areas as well as the Specular Map.
Transparent Properties
This shader can make mesh geometry partially or fully transparent by reading the alpha channel of the main texture. In the alpha, 0 (black) is completely transparent and 255 (white) is completely opaque. If your main texture does not have an alpha channel, the object will appear completely opaque.
Using transparent objects in your game can be tricky, as there are traditional graphical programming problems that can cause sorting issues in your game. For example, if you see odd results when looking through two windows at once, you're experiencing the classical problem of using transparency. The general rule is to be aware that in some cases one transparent object may be drawn in front of another in an unusual way, especially if the objects are intersecting, enclose each other, or are of very different sizes. For this reason, use transparent objects only when you need them, and try not to let them become excessive. You should also make your designers aware that such sorting problems can occur, and have them prepare to change some of the design to work around these issues.
Parallax Normal Mapped Properties
Parallax Normal mapped is the same as regular Normal mapped, but with a better simulation of "depth". The extra depth effect is achieved through the use of a Height Map, which is contained in the alpha channel of the Normal map. In the alpha, black is zero depth and white is full depth. This is most often used on bricks and stones to better display the cracks between them.
The Parallax mapping technique is pretty simple, so it can produce artifacts and unusual effects. Specifically, very steep height transitions in the Height Map should be avoided. Adjusting the Height value in the Inspector can also cause the object to become distorted in an odd, unrealistic way. For this reason, it is recommended that you use gradual Height Map transitions and keep the Height slider toward the shallow end.
Specular Properties
Specular computes the same simple (Lambertian) lighting as Diffuse, plus a viewer-dependent specular highlight. This is called the Blinn-Phong lighting model. The specular highlight depends on the surface angle, the light angle and the viewing angle. The highlight is actually just a realtime-suitable simulation of a blurred reflection of the light source. The level of blur of the highlight is controlled with the Shininess slider in the Inspector.
Additionally, the alpha channel of the main texture acts as a Specular Map (sometimes called a "Gloss Map"), defining which areas of the object are more reflective than others. Black areas of the alpha give zero specular reflection, while white areas give full specular reflection. This is very useful when you want different areas of the object to reflect different levels of specularity. For example, rusty metal would use low specularity, while polished metal would use high specularity. Lipstick has higher specularity than skin, and skin has higher specularity than cotton clothes. A well-made Specular Map can make a huge difference in impressing the player.
Performance
Generally, this shader is on the more expensive rendering side. For more details, please view the Shader Performance page.
Page last updated: 2007-05-08
shader-TransparentCutoutFamily
The Transparent Cutout shaders are used for objects that have fully opaque and fully transparent parts (no partial transparency). Things like chain fences, trees, grass, etc.
Transparent Cutout Vertex-Lit
Assets needed:
- One Base texture with alpha channel for Transparency Map
Transparent Cutout Diffuse
Assets needed:
- One Base texture with alpha channel for Transparency Map
Transparent Cutout Specular
Assets needed:
- One Base texture with alpha channel for combined Transparency Map/Specular Map
Note: One limitation of this shader is that the Base texture's alpha channel doubles as a Specular Map for the Specular shaders in this family.
Transparent Cutout Bumped
Assets needed:
- One Base texture with alpha channel for Transparency Map
- One Normal map, no alpha channel required
Transparent Cutout Bumped Specular
Assets needed:
- One Base texture with alpha channel for combined Transparency Map/Specular Map
- One Normal map, no alpha channel required
Note: One limitation of this shader is that the Base texture's alpha channel doubles as a Specular Map for the Specular shaders in this family.
Page last updated: 2010-07-14
shader-TransCutVertexLit

Transparent Cutout Properties
Cutout shader is an alternative way of displaying transparent objects. Differences between Cutout and regular Transparent shaders are:
- This shader cannot have partially transparent areas. Everything will be either fully opaque or fully transparent.
- Objects using this shader can cast and receive shadows!
- The graphical sorting problems normally associated with Transparent shaders do not occur when using this shader.
This shader uses an alpha channel contained in the Base Texture to determine the transparent areas. If the alpha contains a blend between transparent and opaque areas, you can manually determine the cutoff point for which areas will be shown. You change this cutoff by adjusting the Alpha Cutoff slider.
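The cutoff test above amounts to a per-pixel keep-or-discard decision. The following Python sketch mimics it; the function name is invented for illustration (in a shader this would typically be a clip/discard against the alpha):

```python
# Toy version of the Alpha Cutoff test: each pixel is either kept fully
# opaque or discarded -- there is no partial transparency.
def cutout_visible(alpha, cutoff=0.5):
    """True = pixel drawn fully opaque, False = pixel discarded."""
    return alpha >= cutoff

alphas = [0.0, 0.3, 0.5, 0.8, 1.0]
print([cutout_visible(a) for a in alphas])              # [False, False, True, True, True]
# Raising the cutoff discards more of any blended edge in the alpha:
print([cutout_visible(a, cutoff=0.9) for a in alphas])  # [False, False, False, False, True]
```

Because every surviving pixel is fully opaque, no blending order matters, which is why Cutout shaders avoid the sorting problems of regular Transparent shaders and can cast and receive shadows.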
Vertex-Lit Properties
This shader is Vertex-Lit, which is one of the simplest shaders. All lights shining on it are rendered in a single pass and calculated at vertices only.
Because it is vertex-lit, it won't display any pixel-based rendering effects, such as light cookies, normal mapping, or shadows. This shader is also much more sensitive to tessellation of the models. If you put a point light very close to a cube using this shader, the light will only be calculated at the corners. Pixel-lit shaders are much more effective at creating a nice round highlight, independent of tessellation. If that's the effect you want, you may consider using a pixel-lit shader or increasing the tessellation of the objects instead.
Performance
Generally, this shader is cheap to render. For more details, please view the Shader Performance page.
Page last updated: 2012-11-13
shader-TransCutDiffuse

Transparent Cutout Properties
Cutout shader is an alternative way of displaying transparent objects. Differences between Cutout and regular Transparent shaders are:
- This shader cannot have partially transparent areas. Everything will be either fully opaque or fully transparent.
- Objects using this shader can cast and receive shadows!
- The graphical sorting problems normally associated with Transparent shaders do not occur when using this shader.
This shader uses an alpha channel contained in the Base Texture to determine the transparent areas. If the alpha contains a blend between transparent and opaque areas, you can manually determine the cutoff point for which areas will be shown. You change this cutoff by adjusting the Alpha Cutoff slider.
Diffuse Properties
Diffuse computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Performance
Generally, this shader is cheap to render. For more details, please view the Shader Performance page.
Page last updated: 2012-11-13
shader-TransCutSpecular

One consideration for this shader is that the Base texture's alpha channel defines both the Transparent areas as well as the Specular Map.
Transparent Cutout Properties
Cutout shader is an alternative way of displaying transparent objects. Differences between Cutout and regular Transparent shaders are:
- This shader cannot have partially transparent areas. Everything will be either fully opaque or fully transparent.
- Objects using this shader can cast and receive shadows!
- The graphical sorting problems normally associated with Transparent shaders do not occur when using this shader.
This shader uses an alpha channel contained in the Base Texture to determine the transparent areas. If the alpha contains a blend between transparent and opaque areas, you can manually determine the cutoff point for which areas will be shown. You change this cutoff by adjusting the Alpha Cutoff slider.
Specular Properties
Specular computes the same simple (Lambertian) lighting as Diffuse, plus a viewer-dependent specular highlight. This is called the Blinn-Phong lighting model. It has a specular highlight that depends on surface angle, light angle, and viewing angle. The highlight is actually just a realtime-suitable way to simulate blurred reflection of the light source. The level of blur for the highlight is controlled with the Shininess slider in the Inspector.
Additionally, the alpha channel of the main texture acts as a Specular Map (sometimes called a "Gloss Map"), defining which areas of the object are more reflective than others. Black areas of the alpha will be zero specular reflection, while white areas will be full specular reflection. This is very useful when you want different areas of your object to reflect different levels of specularity. For example, something like rusty metal would use low specularity, while polished metal would use high specularity. Lipstick has higher specularity than skin, and skin has higher specularity than cotton clothes. A well-made Specular Map can make a huge difference in impressing the player.
Performance
Generally, this shader is moderately expensive to render. For more details, please view the Shader Performance page.
Page last updated: 2007-05-19
shader-TransCutBumpedDiffuse

Transparent Cutout Properties
Cutout shader is an alternative way of displaying transparent objects. Differences between Cutout and regular Transparent shaders are:
- This shader cannot have partially transparent areas. Everything will be either fully opaque or fully transparent.
- Objects using this shader can cast and receive shadows!
- The graphical sorting problems normally associated with Transparent shaders do not occur when using this shader.
This shader uses an alpha channel contained in the Base Texture to determine the transparent areas. If the alpha contains a blend between transparent and opaque areas, you can manually determine the cutoff point for which areas will be shown. You change this cutoff by adjusting the Alpha Cutoff slider.
Normal mapped Properties
Like a Diffuse shader, this computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Normal mapping simulates small surface details using a texture, instead of spending more polygons to actually carve out the details. It does not actually change the shape of the object, but uses a special texture called a Normal Map to achieve this effect. In the normal map, each pixel's color value represents the angle of the surface normal. Lighting is then computed using this value instead of the one derived from the geometry; the normal map effectively overrides the mesh's geometry when calculating lighting of the object.
Creating Normal maps
You can import a regular grayscale image and convert it to a Normal Map within Unity. To learn how to do this, please read the Normal map FAQ page.
Technical Details
The Normal Map is a tangent-space type of normal map. Tangent space is the space that "follows the surface" of the model geometry. In this space, Z always points away from the surface. Tangent-space Normal Maps are a bit more expensive than the other, "object space" type, but have some advantages:
- It's possible to use them on deforming models - the bumps will remain on the deforming surface and will just work.
- It's possible to reuse parts of the normal map on different areas of a model, or across different models.
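How a tangent-space normal-map texel becomes a shading normal can be sketched in two steps: decode the color into a [-1, 1] vector (Z pointing away from the surface), then rotate it by the surface's tangent basis. This Python snippet is illustrative only; the function names are invented:

```python
# Illustrative decode of a tangent-space normal map texel.
def decode_normal(rgb):
    """Map a normal-map color in [0,1] to a tangent-space vector in [-1,1]."""
    return tuple(2.0 * c - 1.0 for c in rgb)

def to_world(n_ts, tangent, bitangent, normal):
    """Rotate the tangent-space normal into the mesh's frame using the
    per-vertex tangent basis (columns tangent, bitangent, normal)."""
    return tuple(
        n_ts[0] * tangent[i] + n_ts[1] * bitangent[i] + n_ts[2] * normal[i]
        for i in range(3)
    )

flat = decode_normal((0.5, 0.5, 1.0))  # the familiar "flat" normal-map blue
print(flat)                            # (0.0, 0.0, 1.0)
# With an identity basis, tangent space coincides with the mesh's frame:
print(to_world(flat, (1, 0, 0), (0, 1, 0), (0, 0, 1)))  # (0.0, 0.0, 1.0)
```

Because the stored vectors are relative to the surface, the same map keeps working when the surface deforms, which is the advantage listed above.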
Diffuse Properties
Diffuse computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Performance
Generally, this shader is cheap to render. For more details, please view the Shader Performance page.
Page last updated: 2012-11-13
shader-TransCutBumpedSpecular

One consideration for this shader is that the Base texture's alpha channel defines both the Transparent areas as well as the Specular Map.
Transparent Cutout Properties
Cutout shader is an alternative way of displaying transparent objects. Differences between Cutout and regular Transparent shaders are:
- This shader cannot have partially transparent areas. Everything will be either fully opaque or fully transparent.
- Objects using this shader can cast and receive shadows!
- The graphical sorting problems normally associated with Transparent shaders do not occur when using this shader.
This shader uses an alpha channel contained in the Base Texture to determine the transparent areas. If the alpha contains a blend between transparent and opaque areas, you can manually determine the cutoff point for which areas will be shown. You change this cutoff by adjusting the Alpha Cutoff slider.
Normal mapped Properties
Like a Diffuse shader, this computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Normal mapping simulates small surface details using a texture, instead of spending more polygons to actually carve out the details. It does not actually change the shape of the object, but uses a special texture called a Normal Map to achieve this effect. In the normal map, each pixel's color value represents the angle of the surface normal. Lighting is then computed using this value instead of the one derived from the geometry; the normal map effectively overrides the mesh's geometry when calculating lighting of the object.
Creating Normal maps
You can import a regular grayscale image and convert it to a Normal Map within Unity. To learn how to do this, please read the Normal map FAQ page.
Technical Details
The Normal Map is a tangent-space type of normal map. Tangent space is the space that "follows the surface" of the model geometry. In this space, Z always points away from the surface. Tangent-space Normal Maps are a bit more expensive than the other, "object space" type, but have some advantages:
- It's possible to use them on deforming models - the bumps will remain on the deforming surface and will just work.
- It's possible to reuse parts of the normal map on different areas of a model, or across different models.
Specular Properties
Specular computes the same simple (Lambertian) lighting as Diffuse, plus a viewer-dependent specular highlight. This is called the Blinn-Phong lighting model. It has a specular highlight that depends on surface angle, light angle, and viewing angle. The highlight is actually just a realtime-suitable way to simulate blurred reflection of the light source. The level of blur for the highlight is controlled with the Shininess slider in the Inspector.
Additionally, the alpha channel of the main texture acts as a Specular Map (sometimes called a "Gloss Map"), defining which areas of the object are more reflective than others. Black areas of the alpha will be zero specular reflection, while white areas will be full specular reflection. This is very useful when you want different areas of your object to reflect different levels of specularity. For example, something like rusty metal would use low specularity, while polished metal would use high specularity. Lipstick has higher specularity than skin, and skin has higher specularity than cotton clothes. A well-made Specular Map can make a huge difference in impressing the player.
Performance
Generally, this shader is moderately expensive to render. For more details, please view the Shader Performance page.
Page last updated: 2007-05-19
shader-SelfIllumFamily
The Self-Illuminated shaders will emit light only onto themselves based on an attached alpha channel. They do not require any Lights to shine on them to emit this light. Any vertex lights or pixel lights will simply add more light on top of the self-illumination.
This is mostly used for light emitting objects. For example, parts of the wall texture could be self-illuminated to simulate lights or displays. It can also be useful to light power-up objects that should always have consistent lighting throughout the game, regardless of the lights shining on it.
Self-Illuminated Vertex-Lit
Assets needed:
- One Base texture, no alpha channel required
- One Illumination texture with alpha channel for Illumination Map
Self-Illuminated Diffuse
Assets needed:
- One Base texture, no alpha channel required
- One Illumination texture with alpha channel for Illumination Map
Self-Illuminated Specular
Assets needed:
- One Base texture with alpha channel for Specular Map
- One Illumination texture with alpha channel for Illumination Map
Self-Illuminated Bumped
Assets needed:
- One Base texture, no alpha channel required
- One Normal map with alpha channel for Illumination Map
Self-Illuminated Bumped Specular
Assets needed:
- One Base texture with alpha channel for Specular Map
- One Normal map with alpha channel for Illumination Map
Self-Illuminated Parallax
Assets needed:
- One Base texture, no alpha channel required
- One Normal map with alpha channel for Illumination Map & Parallax Depth combined
Note: One consideration of this shader is that the Bumpmap texture's alpha channel doubles as both the Illumination Map and the Parallax Depth.
Self-Illuminated Parallax Specular
Assets needed:
- One Base texture with alpha channel for Specular Map
- One Normal map with alpha channel for Illumination Map & Parallax Depth combined
Note: One consideration of this shader is that the Bumpmap texture's alpha channel doubles as both the Illumination Map and the Parallax Depth.
Page last updated: 2010-07-14
shader-SelfIllumVertexLit

Self-Illuminated Properties
This shader allows you to define bright and dark parts of the object. The alpha channel of a secondary texture will define areas of the object that "emit" light by themselves, even when no light is shining on it. In the alpha channel, black is zero light, and white is full light emitted by the object. Any scene lights will add illumination on top of the shader's illumination. So even if your object does not emit any light by itself, it will still be lit by lights in your scene.
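The alpha-driven emission described above can be mimicked with a small sketch. The Python below is illustrative math only (names invented), not the actual shader: the illumination alpha adds light on top of whatever the scene lights contribute.

```python
# Toy self-illumination: emission from the illumination alpha is added on
# top of the scene lighting, then clamped to the displayable range.
def shade(base_color, scene_light, illum_alpha):
    """illum_alpha: 0 = no self-emission (black), 1 = full emission (white).
    scene_light is a scalar stand-in for the combined scene lighting."""
    return tuple(min(1.0, c * scene_light + c * illum_alpha) for c in base_color)

base = (0.8, 0.6, 0.4)
# Unlit scene, but a white illumination alpha keeps the object visible:
print(shade(base, scene_light=0.0, illum_alpha=1.0))  # (0.8, 0.6, 0.4)
# Scene lights simply add illumination on top of the emission:
print(shade(base, scene_light=0.5, illum_alpha=1.0))
```

This matches the behaviour described above: with a black illumination alpha the object is lit only by scene lights, while a white alpha keeps it visible even in complete darkness.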
Vertex-Lit Properties
This shader is Vertex-Lit, which is one of the simplest shaders. All lights shining on it are rendered in a single pass and calculated at vertices only.
Because it is vertex-lit, it won't display any pixel-based rendering effects, such as light cookies, normal mapping, or shadows. This shader is also much more sensitive to tessellation of the models. If you put a point light very close to a cube using this shader, the light will only be calculated at the corners. Pixel-lit shaders are much more effective at creating a nice round highlight, independent of tessellation. If that's the effect you want, you may consider using a pixel-lit shader or increasing the tessellation of the objects instead.
Performance
Generally, this shader is cheap to render. For more details, please view the Shader Performance page.
Page last updated: 2012-11-13
shader-SelfIllumDiffuse
Self-Illuminated Diffuse

Self-Illuminated Properties
This shader allows you to define bright and dark parts of the object. The alpha channel of a secondary texture will define areas of the object that "emit" light by themselves, even when no light is shining on it. In the alpha channel, black is zero light, and white is full light emitted by the object. Any scene lights will add illumination on top of the shader's illumination. So even if your object does not emit any light by itself, it will still be lit by lights in your scene.
Diffuse Properties
Diffuse computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Performance
Generally, this shader is cheap to render. For more details, please view the Shader Performance page.
Page last updated: 2012-11-13
shader-SelfIllumSpecular

Self-Illuminated Properties
This shader allows you to define bright and dark parts of the object. The alpha channel of a secondary texture will define areas of the object that "emit" light by themselves, even when no light is shining on it. In the alpha channel, black is zero light, and white is full light emitted by the object. Any scene lights will add illumination on top of the shader's illumination. So even if your object does not emit any light by itself, it will still be lit by lights in your scene.
Specular Properties
Specular computes the same simple (Lambertian) lighting as Diffuse, plus a viewer-dependent specular highlight. This is called the Blinn-Phong lighting model. It has a specular highlight that depends on surface angle, light angle, and viewing angle. The highlight is actually just a realtime-suitable way to simulate blurred reflection of the light source. The level of blur for the highlight is controlled with the Shininess slider in the Inspector.
Additionally, the alpha channel of the main texture acts as a Specular Map (sometimes called a "Gloss Map"), defining which areas of the object are more reflective than others. Black areas of the alpha will be zero specular reflection, while white areas will be full specular reflection. This is very useful when you want different areas of your object to reflect different levels of specularity. For example, something like rusty metal would use low specularity, while polished metal would use high specularity. Lipstick has higher specularity than skin, and skin has higher specularity than cotton clothes. A well-made Specular Map can make a huge difference in impressing the player.
Performance
Generally, this shader is moderately expensive to render. For more details, please view the Shader Performance page.
Page last updated: 2012-11-13
shader-SelfIllumBumpedDiffuse

Self-Illuminated Properties
This shader allows you to define bright and dark parts of the object. The alpha channel of a secondary texture will define areas of the object that "emit" light by themselves, even when no light is shining on it. In the alpha channel, black is zero light, and white is full light emitted by the object. Any scene lights will add illumination on top of the shader's illumination. So even if your object does not emit any light by itself, it will still be lit by lights in your scene.
Normal mapped Properties
Like a Diffuse shader, this computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Normal mapping simulates small surface details using a texture, instead of spending more polygons to actually carve out the details. It does not actually change the shape of the object, but uses a special texture called a Normal Map to achieve this effect. In the normal map, each pixel's color value represents the angle of the surface normal. Lighting is then computed using this value instead of the one derived from the geometry; the normal map effectively overrides the mesh's geometry when calculating lighting of the object.
Creating Normal maps
You can import a regular grayscale image and convert it to a Normal Map within Unity. To learn how to do this, please read the Normal map FAQ page.
Technical Details
The Normal Map is a tangent-space type of normal map. Tangent space is the space that "follows the surface" of the model geometry. In this space, Z always points away from the surface. Tangent-space Normal Maps are a bit more expensive than the other, "object space" type, but have some advantages:
- It's possible to use them on deforming models - the bumps will remain on the deforming surface and will just work.
- It's possible to reuse parts of the normal map on different areas of a model, or across different models.
Diffuse Properties
Diffuse computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Performance
Generally, this shader is cheap to render. For more details, please view the Shader Performance page.
Page last updated: 2012-08-23
shader-SelfIllumBumpedSpecular

Self-Illuminated Properties
This shader allows you to define bright and dark parts of the object. The alpha channel of a secondary texture will define areas of the object that "emit" light by themselves, even when no light is shining on it. In the alpha channel, black is zero light, and white is full light emitted by the object. Any scene lights will add illumination on top of the shader's illumination. So even if your object does not emit any light by itself, it will still be lit by lights in your scene.
Normal mapped Properties
Like a Diffuse shader, this computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Normal mapping simulates small surface details using a texture, instead of spending more polygons to actually carve out the details. It does not actually change the shape of the object, but uses a special texture called a Normal Map to achieve this effect. In the normal map, each pixel's color value represents the angle of the surface normal. Lighting is then computed using this value instead of the one derived from the geometry; the normal map effectively overrides the mesh's geometry when calculating lighting of the object.
Creating Normal maps
You can import a regular grayscale image and convert it to a Normal Map within Unity. To learn how to do this, please read the Normal map FAQ page.
Technical Details
The Normal Map is a tangent-space type of normal map. Tangent space is the space that "follows the surface" of the model geometry. In this space, Z always points away from the surface. Tangent-space Normal Maps are a bit more expensive than the other, "object space" type, but have some advantages:
- It's possible to use them on deforming models - the bumps will remain on the deforming surface and will just work.
- It's possible to reuse parts of the normal map on different areas of a model, or across different models.
Specular Properties
Specular computes the same simple (Lambertian) lighting as Diffuse, plus a viewer-dependent specular highlight. This is called the Blinn-Phong lighting model. It has a specular highlight that depends on surface angle, light angle, and viewing angle. The highlight is actually just a realtime-suitable way to simulate blurred reflection of the light source. The level of blur for the highlight is controlled with the Shininess slider in the Inspector.
Additionally, the alpha channel of the main texture acts as a Specular Map (sometimes called a "Gloss Map"), defining which areas of the object are more reflective than others. Black areas of the alpha will be zero specular reflection, while white areas will be full specular reflection. This is very useful when you want different areas of your object to reflect different levels of specularity. For example, something like rusty metal would use low specularity, while polished metal would use high specularity. Lipstick has higher specularity than skin, and skin has higher specularity than cotton clothes. A well-made Specular Map can make a huge difference in impressing the player.
Performance
Generally, this shader is moderately expensive to render. For more details, please view the Shader Performance page.
Page last updated: 2012-08-23
shader-SelfIllumParallaxDiffuse

Self-Illuminated Properties
This shader allows you to define bright and dark parts of the object. The alpha channel of a secondary texture will define areas of the object that "emit" light by themselves, even when no light is shining on it. In the alpha channel, black is zero light, and white is full light emitted by the object. Any scene lights will add illumination on top of the shader's illumination. So even if your object does not emit any light by itself, it will still be lit by lights in your scene.
Parallax Normal mapped Properties
Parallax Normal mapped is the same as regular Normal mapped, but with a better simulation of "depth". The extra depth effect is achieved through the use of a Height Map, which is contained in the alpha channel of the Normal map. In the alpha, black is zero depth and white is full depth. This is most often used with bricks and stones to better display the cracks between them.
The Parallax mapping technique is fairly simple, so it can produce artifacts and unusual effects. Specifically, very steep height transitions in the Height Map should be avoided. Adjusting the Height value in the Inspector can also cause the object to become distorted in an odd, unrealistic way. For this reason, it is recommended that you use gradual Height Map transitions and keep the Height slider toward the shallow end.
Diffuse Properties
Diffuse computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Performance
Generally, this shader is on the more expensive rendering side. For more details, please view the Shader Performance page.
Page last updated: 2012-08-23
shader-SelfIllumParallaxSpecular

Self-Illuminated Properties
This shader allows you to define bright and dark parts of the object. The alpha channel of a secondary texture will define areas of the object that "emit" light by themselves, even when no light is shining on it. In the alpha channel, black is zero light, and white is full light emitted by the object. Any scene lights will add illumination on top of the shader's illumination. So even if your object does not emit any light by itself, it will still be lit by lights in your scene.
Parallax Normal mapped Properties
Parallax Normal mapped is the same as regular Normal mapped, but with a better simulation of "depth". The extra depth effect is achieved through the use of a Height Map, which is contained in the alpha channel of the Normal map. In the alpha, black is zero depth and white is full depth. This is most often used with bricks and stones to better display the cracks between them.
The Parallax mapping technique is fairly simple, so it can produce artifacts and unusual effects. Specifically, very steep height transitions in the Height Map should be avoided. Adjusting the Height value in the Inspector can also cause the object to become distorted in an odd, unrealistic way. For this reason, it is recommended that you use gradual Height Map transitions and keep the Height slider toward the shallow end.
Specular Properties
Specular computes the same simple (Lambertian) lighting as Diffuse, plus a viewer-dependent specular highlight. This is called the Blinn-Phong lighting model. It has a specular highlight that depends on surface angle, light angle, and viewing angle. The highlight is actually just a realtime-suitable way to simulate blurred reflection of the light source. The level of blur for the highlight is controlled with the Shininess slider in the Inspector.
Additionally, the alpha channel of the main texture acts as a Specular Map (sometimes called a "Gloss Map"), defining which areas of the object are more reflective than others. Black areas of the alpha will be zero specular reflection, while white areas will be full specular reflection. This is very useful when you want different areas of your object to reflect different levels of specularity. For example, something like rusty metal would use low specularity, while polished metal would use high specularity. Lipstick has higher specularity than skin, and skin has higher specularity than cotton clothes. A well-made Specular Map can make a huge difference in impressing the player.
Performance
Generally, this shader is on the more expensive rendering side. For more details, please view the Shader Performance page.
Page last updated: 2012-08-23
shader-ReflectiveFamily
Reflective shaders will allow you to use a Cubemap which will be reflected on your mesh. You can also define areas of more or less reflectivity on your object through the alpha channel of the Base texture. High reflectivity is a great effect for glosses, oils, chrome, etc. Low reflectivity can add effect for metals, liquid surfaces, or video monitors.
Reflective Vertex-Lit
Assets needed:
- One Base texture with alpha channel for defining reflective areas
- One Reflection Cubemap for Reflection Map
Reflective Diffuse
Assets needed:
- One Base texture with alpha channel for defining reflective areas
- One Reflection Cubemap for Reflection Map
Reflective Specular
Assets needed:
- One Base texture with alpha channel for defining reflective areas & Specular Map combined
- One Reflection Cubemap for Reflection Map
Note: One consideration for this shader is that the Base texture's alpha channel will double as both the reflective areas and the Specular Map.
Reflective Normal mapped
Assets needed:
- One Base texture with alpha channel for defining reflective areas
- One Reflection Cubemap for Reflection Map
- One Normal map, no alpha channel required
Reflective Normal Mapped Specular
Assets needed:
- One Base texture with alpha channel for defining reflective areas & Specular Map combined
- One Reflection Cubemap for Reflection Map
- One Normal map, no alpha channel required
Note: One consideration for this shader is that the Base texture's alpha channel will double as both the reflective areas and the Specular Map.
Reflective Parallax
Assets needed:
- One Base texture with alpha channel for defining reflective areas
- One Reflection Cubemap for Reflection Map
- One Normal map, with alpha channel for Parallax Depth
Reflective Parallax Specular
Assets needed:
- One Base texture with alpha channel for defining reflective areas & Specular Map
- One Reflection Cubemap for Reflection Map
- One Normal map, with alpha channel for Parallax Depth
Note: One consideration for this shader is that the Base texture's alpha channel will double as both the reflective areas and the Specular Map.
Reflective Normal mapped Unlit
Assets needed:
- One Base texture with alpha channel for defining reflective areas
- One Reflection Cubemap for Reflection Map
- One Normal map, no alpha channel required
Reflective Normal mapped Vertex-Lit
Assets needed:
- One Base texture with alpha channel for defining reflective areas
- One Reflection Cubemap for Reflection Map
- One Normal map, no alpha channel required
shader-ReflectiveVertexLit

Reflective Properties
This shader simulates reflective surfaces such as cars, metal objects, etc. It requires an environment Cubemap which defines what exactly is reflected. The main texture's alpha channel defines the strength of reflection on the object's surface. Any scene lights will add illumination on top of what is reflected.
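The cubemap lookup implied above is driven by a reflection vector: the view direction is mirrored about the surface normal, and the resulting direction indexes the Cubemap. The Python below sketches just that mirror step (illustrative math, not Unity's shader source):

```python
# Reflection of an incident direction I about a unit surface normal N:
# R = I - 2*(I.N)*N. The resulting direction would index the Cubemap.
def reflect(incident, normal):
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

# A view ray hitting a floor (normal +Z) at 45 degrees bounces upward:
print(reflect((0.707, 0.0, -0.707), (0.0, 0.0, 1.0)))  # (0.707, 0.0, 0.707)
```

Because the reflection vector changes as the camera moves, the sampled Cubemap region changes too, which is what makes the surface read as a mirror-like material.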
Vertex-Lit Properties
This shader is Vertex-Lit, which is one of the simplest shaders. All lights shining on it are rendered in a single pass and calculated at vertices only.
Because it is vertex-lit, it won't display any pixel-based rendering effects, such as light cookies, normal mapping, or shadows. This shader is also much more sensitive to tessellation of the models. If you put a point light very close to a cube using this shader, the light will only be calculated at the corners. Pixel-lit shaders are much more effective at creating a nice round highlight, independent of tessellation. If that's the effect you want, you may consider using a pixel-lit shader or increasing the tessellation of the objects instead.
Performance
Generally, this shader is cheap to render. For more details, please view the Shader Performance page.
Page last updated: 2012-11-13
shader-ReflectiveDiffuse

Reflective Properties
This shader simulates reflective surfaces such as cars, metal objects, etc. It requires an environment Cubemap which defines what exactly is reflected. The main texture's alpha channel defines the strength of reflection on the object's surface. Any scene lights will add illumination on top of what is reflected.
Diffuse Properties
Diffuse computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Performance
Generally, this shader is cheap to render. For more details, please view the Shader Performance page.
Page last updated: 2012-11-13
shader-ReflectiveSpecular

One consideration for this shader is that the Base texture's alpha channel will double as both the Reflection Map and the Specular Map.
Reflective Properties
This shader simulates reflective surfaces such as cars, metal objects, etc. It requires an environment Cubemap which defines what exactly is reflected. The main texture's alpha channel defines the strength of reflection on the object's surface. Any scene lights will add illumination on top of what is reflected.
Specular Properties
Specular computes the same simple (Lambertian) lighting as Diffuse, plus a viewer-dependent specular highlight. This is called the Blinn-Phong lighting model. It has a specular highlight that depends on surface angle, light angle, and viewing angle. The highlight is actually just a realtime-suitable way to simulate blurred reflection of the light source. The level of blur for the highlight is controlled with the Shininess slider in the Inspector.
Additionally, the alpha channel of the main texture acts as a Specular Map (sometimes called a "Gloss Map"), defining which areas of the object are more reflective than others. Black areas of the alpha will be zero specular reflection, while white areas will be full specular reflection. This is very useful when you want different areas of your object to reflect different levels of specularity. For example, something like rusty metal would use low specularity, while polished metal would use high specularity. Lipstick has higher specularity than skin, and skin has higher specularity than cotton clothes. A well-made Specular Map can make a huge difference in impressing the player.
Performance
Generally, this shader is moderately expensive to render. For more details, please view the Shader Performance page.
Page last updated: 2007-05-08
shader-ReflectiveBumpedDiffuse

Reflective Properties
This shader simulates reflective surfaces such as cars, metal objects, etc. It requires an environment Cubemap which defines what exactly is reflected. The main texture's alpha channel defines the strength of reflection on the object's surface. Any scene lights will add illumination on top of what is reflected.
Normal mapped Properties
Like a Diffuse shader, this computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Normal mapping simulates small surface details using a texture, instead of spending more polygons to actually carve out the details. It does not actually change the shape of the object, but uses a special texture called a Normal Map to achieve this effect. In the normal map, each pixel's color value represents the angle of the surface normal. Lighting is then computed using this value instead of the one derived from the geometry; the normal map effectively overrides the mesh's geometry when calculating lighting of the object.
Creating Normal maps
You can import a regular grayscale image and convert it to a Normal Map within Unity. To learn how to do this, please read the Normal map FAQ page.
Technical Details
The Normal Map is a tangent-space type of normal map. Tangent space is the space that "follows the surface" of the model geometry. In this space, Z always points away from the surface. Tangent-space Normal Maps are a bit more expensive than the other, "object space" type, but have some advantages:
- It's possible to use them on deforming models - the bumps will remain on the deforming surface and will just work.
- It's possible to reuse parts of the normal map on different areas of a model, or across different models.
Diffuse Properties
Diffuse computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Performance
Generally, this shader is cheap to render. For more details, please view the Shader Performance page.
Page last updated: 2012-11-13
shader-ReflectiveBumpedSpecular

One consideration for this shader is that the Base texture's alpha channel will double as both the Reflection Map and the Specular Map.
Reflective Properties
This shader simulates reflective surfaces such as cars, metal objects, etc. It requires an environment Cubemap which defines what exactly is reflected. The main texture's alpha channel defines the strength of reflection on the object's surface. Any scene lights will add illumination on top of what is reflected.
Normal mapped Properties
Like a Diffuse shader, this computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Normal mapping simulates small surface details using a texture, instead of spending more polygons to actually carve out the details. It does not actually change the shape of the object, but uses a special texture called a Normal Map to achieve this effect. In the normal map, each pixel's color value represents the angle of the surface normal. Lighting is then computed using this value instead of the one derived from the geometry; the normal map effectively overrides the mesh's geometry when calculating lighting of the object.
Creating Normal maps
You can import a regular grayscale image and convert it to a Normal Map within Unity. To learn how to do this, please read the Normal map FAQ page.
Technical Details
The Normal Map is a tangent-space type of normal map. Tangent space is the space that "follows the surface" of the model geometry. In this space, Z always points away from the surface. Tangent-space Normal Maps are a bit more expensive than the other, "object space" type, but have some advantages:
- It's possible to use them on deforming models - the bumps will remain on the deforming surface and will just work.
- It's possible to reuse parts of the normal map on different areas of a model, or across different models.
Specular Properties
Specular computes the same simple (Lambertian) lighting as Diffuse, plus a viewer-dependent specular highlight. This is called the Blinn-Phong lighting model. It has a specular highlight that depends on surface angle, light angle, and viewing angle. The highlight is actually just a realtime-suitable way to simulate blurred reflection of the light source. The level of blur for the highlight is controlled with the Shininess slider in the Inspector.
Additionally, the alpha channel of the main texture acts as a Specular Map (sometimes called a "Gloss Map"), defining which areas of the object are more reflective than others. Black areas of the alpha will be zero specular reflection, while white areas will be full specular reflection. This is very useful when you want different areas of your object to reflect different levels of specularity. For example, something like rusty metal would use low specularity, while polished metal would use high specularity. Lipstick has higher specularity than skin, and skin has higher specularity than cotton clothes. A well-made Specular Map can make a huge difference in impressing the player.
Performance
Generally, this shader is moderately expensive to render. For more details, please view the Shader Performance page.
Page last updated: 2010-07-14
shader-ReflectiveParallaxDiffuse

Reflective Properties
This shader simulates reflective surfaces such as cars, metal objects, etc. It requires an environment Cubemap which defines what exactly is reflected. The main texture's alpha channel defines the strength of reflection on the object's surface. Any scene lights will add illumination on top of what is reflected.
Parallax Normal mapped Properties
Parallax Normal mapped is the same as regular Normal mapped, but with a better simulation of "depth". The extra depth effect is achieved through the use of a Height Map, which is contained in the alpha channel of the Normal map. In the alpha, black is zero depth and white is full depth. This is most often used with bricks and stones to better display the cracks between them.
The Parallax mapping technique is fairly simple, so it can produce artifacts and unusual effects. Specifically, very steep height transitions in the Height Map should be avoided. Adjusting the Height value in the Inspector can also cause the object to become distorted in an odd, unrealistic way. For this reason, it is recommended that you use gradual Height Map transitions and keep the Height slider toward the shallow end.
Diffuse Properties
Diffuse computes a simple (Lambertian) lighting model. The lighting on the surface decreases as the angle between it and the light decreases. The lighting depends only on this angle, and does not change as the camera moves or rotates around.
Performance
Generally, this shader is on the more expensive rendering side. For more details, please view the Shader Performance page.
Page last updated: 2007-05-08
shader-ReflectiveParallaxSpecular

このシェーダの考慮点は、ベーステクスチャのアルファチャネルが(Reflection Map)反射マップと(Specular Map)鏡面マップの両方として2倍になることです。
Reflective プロパティ
このシェーダは車や、金属の物体などの反射表面をシミュレートします。具体的に何が反射するのかを定義する、Environment Cubemap(環境キューブマップ)が必要です。メインのテクスチャのアルファチャネルはオブジェクトの表面での反射の強度を定義します。シーンにおけるライトはすでに反射されているものの上に追加で照らされます。
Parallax Normal mapped プロパティ
Parallax Normal mapped(視差ノーマルマップ)は通常のNormal mappedと同じであるが、デプスをより良くシミュレーションしています。この追加のデプス効果はHeight Map(高低マップ)を使用して得られます。Height Mapはノーマルマップのアルファチャネルに含まれます。アルファにおいて、黒はデプスがゼロで、白はデプスが最大値です。これは煉瓦や石で主に使用され間のクラックをより良く表現します。
Parallax mappingのテクニックは比較的簡単ですが、画像の乱れや異常なエフェクトが発生することがあります。具体的には、Height Mapでの急激な高低さの変化は避けるべきです。InspectorでHeightの値を調整することもオブジェクトの歪みにつながり、不自然で非現実的に見えることがあります。この理由から、Height Mapで穏やかな高低さの変化とすることと、Heightスライドバーを低い側に保つこと、を推奨します。
Specular(鏡面)プロパティ
Specular(鏡面)はDiffuse(拡散)と同様のシンプル(Lambertian)ライティングを使用するのに加えてビューア依存の鏡面ハイライトを計算します。Blinn-Phongライティングモデルと呼ばれます。鏡面のハイライトは、表面の角度、ライトの角度、およびビューアングル、に依存します。ハイライトは実際にはリアルタイム表現向きの、光源からブラーのかかった反射のシミュレーションです。ハイライトのブラーの度合いはInspectorのShininessスライダで制御されます。
これに加えて、メインのテクスチャのアルファチャネルは鏡面マップ(時々"Gloss Map"とも呼ばれます)として動作し、オブジェクトのどの領域が他の部分より反射するか定義します。アルファの黒い部分は鏡面反射がゼロとなり、白い領域は完全な鏡面反射となります。これはオブジェクトの異なるエリアで鏡面の反射レベルを変更したい場合に便利です。例えば、錆びた金属などは低い鏡面性を使用し、磨かれた金属は高い鏡面性を使用します。口紅は肌よりも鏡面性を高く、肌は綿の服よりも鏡面性を高くします。良く出来た鏡面マップはプレイヤーを関心させるのに大きな違いを生みます。
Performance
Generally, this shader is on the more expensive rendering side. For more details, please view the Shader Performance page.
Page last updated: 2012-11-30
shader-ReflectiveBumpedUnlit

Reflective Properties
This shader will simulate reflective surfaces such as cars, metal objects etc. It requires an environment Cubemap which will define what exactly is reflected. The main texture's alpha channel defines the strength of reflection on the object's surface. Any scene lights will add illumination on top of what is reflected.
Normal mapped Properties
This shader does not use normal-mapping in the traditional way. The normal map does not affect any lights shining on the object, because the shader does not use lights at all. The normal map will only distort the reflection map.
Special Properties
This shader is special because it does not respond to lights at all, so you don't have to worry about performance reduction from use of multiple lights. It simply displays the reflection cube map on the model. The reflection is distorted by the normal map, so you get the benefit of detailed reflection. Because it does not respond to lights, it is quite cheap. It is somewhat of a specialized use case, but in those cases it does exactly what you want as cheaply as possible.
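A minimal sketch of that behaviour in surface shader form (the property names _Cube and _BumpMap are assumed here; the real shader's source may differ):

```cg
// Sketch: no lighting is computed at all; the per-pixel normal only
// perturbs the direction used for the cubemap reflection lookup.
void surf (Input IN, inout SurfaceOutput o) {
    o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));
    o.Emission = texCUBE (_Cube, WorldReflectionVector (IN, o.Normal)).rgb;
}
```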
Performance
Generally, this shader is quite cheap to render. For more details, please view the Shader Performance page.
Page last updated: 2010-07-14
shader-ReflectiveBumpedVertexLit

Reflective Properties
This shader will simulate reflective surfaces such as cars, metal objects etc. It requires an environment Cubemap which will define what exactly is reflected. The main texture's alpha channel defines the strength of reflection on the object's surface. Any scene lights will add illumination on top of what is reflected.
Vertex-Lit Properties
This shader is Vertex-Lit, which is one of the simplest shaders. All lights shining on it are rendered in a single pass and calculated at vertices only.
Because it is vertex-lit, it won't display any pixel-based rendering effects such as light cookies, normal mapping, or shadows. This shader is also much more sensitive to tessellation of the models. If you put a point light very close to a cube using this shader, the light will only be calculated at the corners. Pixel-lit shaders are much more effective at creating a nice round highlight, independent of tessellation. If that's an effect you want, you may consider using a pixel-lit shader or increasing the tessellation of the objects instead.
Special Properties
This shader is a good alternative to Reflective Normal mapped. If you do not need the object itself to be affected by pixel lights, but do want the reflection to be affected by a normal map, this shader is for you. This shader is vertex-lit, so it is rendered more quickly than its Reflective Normal mapped counterpart.
Performance
Generally, this shader is not expensive to render. For more details, please view the Shader Performance page.
Page last updated: 2010-07-14
Rendering-Tech
This section explains the technical details behind various aspects of Unity's rendering engine.
- Deferred Lighting Rendering Path
- Forward Rendering Path Details
- Vertex Lit Rendering Path Details
- Hardware Requirements for Unity's Graphics Features
RenderTech-DeferredLighting
This page details the Deferred Lighting rendering path. See this article for a technical overview of deferred lighting.
The Deferred Lighting rendering path is the one with the highest lighting and shadow fidelity. There is no limit on the number of lights that can affect an object and all lights are evaluated per-pixel, which means that they all interact correctly with normal maps, etc. Additionally, all lights can have cookies and shadows.
Deferred lighting has the advantage that the processing overhead of lighting is proportional to the size of the light onscreen, no matter how many objects it illuminates. Therefore, performance can be improved by keeping lights small. Deferred lighting also has highly consistent and predictable behaviour. The effect of each light is computed per-pixel, so there are no lighting computations that break down on large triangles etc.
On the downside, deferred lighting has no real support for anti-aliasing and can't handle semi-transparent objects (these must be rendered using Forward Rendering). There is also no support for the Mesh Renderer's Receive Shadows flag and culling masks are only supported in a limited way.
Requirements
Deferred lighting is only available in Unity Pro. It requires a graphics card with Shader Model 3.0 (or later), support for Depth render textures and two-sided stencil buffers. Most graphics cards made after 2004 support deferred lighting, including GeForce FX and later, Radeon X1300 and later, Intel 965 / GMA X3100 and later. However, it is not currently available on mobile platforms nor Flash.
Performance Considerations
The rendering overhead of realtime lights in deferred lighting is proportional to the number of pixels illuminated by the light and not dependent on scene complexity. So small point or spot lights are very cheap to render and if they are fully or partially occluded by scene objects then they are even cheaper.
Of course, lights with shadows are much more expensive than lights without shadows. In deferred lighting, shadow-casting objects still need to be rendered once or more for each shadow-casting light. Furthermore, the lighting shader that applies shadows has a higher rendering overhead than the one used when shadows are disabled.
Implementation Details
When Deferred Lighting is used, the rendering process in Unity happens in three passes:
- Base Pass: objects are rendered to produce screen-space buffers with depth, normals, and specular power.
- Lighting pass: the previously generated buffers are used to compute lighting into another screen-space buffer.
- Final pass: objects are rendered again. They fetch the computed lighting, combine it with color textures and add any ambient/emissive lighting.
Objects with shaders that can't handle deferred lighting are rendered after this process is complete, using the forward rendering path.
Base Pass
The base pass renders each object once. View space normals and specular power are rendered into a single ARGB32 Render Texture (with normals in RGB channels and specular power in A). If the platform and hardware allow the Z buffer to be read as a texture then depth is not explicitly rendered. If the Z buffer can't be accessed as a texture then depth is rendered in an additional rendering pass using shader replacement.
The result of the base pass is a Z buffer filled with the scene contents and a Render Texture with normals and specular power.
Lighting Pass
The lighting pass computes lighting based on depth, normals and specular power. Lighting is computed in screen space, so the time it takes to process is independent of scene complexity. The lighting buffer is a single ARGB32 Render Texture, with diffuse lighting in the RGB channels and monochrome specular lighting in the A channel. Lighting values are logarithmically encoded to provide greater dynamic range than is usually possible with an ARGB32 texture. The only lighting model available with deferred rendering is Blinn-Phong.
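The exact code is internal to Unity, but the idea behind such a logarithmic encoding can be sketched as follows (an illustrative assumption, not Unity's verbatim source): the lighting pass stores exp2(-lighting), and the final pass recovers it with -log2 before combining it with the color textures.

```cg
// Sketch of logarithmic light-buffer encoding (illustrative only).
// Encode when writing the lighting buffer: exp2(-x) compresses a large
// dynamic range into 0..1, so bright lights don't clip in an ARGB32 target.
fixed4 encoded = exp2(-lighting);

// Decode in the final pass, before multiplying with the albedo:
fixed4 stored = max (tex2Dproj (_LightBuffer, screenUV), 0.001); // avoid log2(0)
fixed4 light = -log2(stored);
```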
Point and spot lights that do not cross the camera's near plane are rendered as 3D shapes, with the Z buffer's test against the scene enabled. This makes partially or fully occluded point and spot lights very cheap to render. Directional lights and point/spot lights that cross the near plane are rendered as fullscreen quads.
If a light has shadows enabled then they are also rendered and applied in this pass. Note that shadows do not come for "free"; shadow casters need to be rendered and a more complex light shader must be applied.
The only lighting model available is Blinn-Phong. If a different model is wanted you can modify the lighting pass shader, by placing the modified version of the Internal-PrePassLighting.shader file from the Built-in shaders into a folder named "Resources" in your "Assets" folder.
Final Pass
The final pass produces the final rendered image. Here all objects are rendered again with shaders that fetch the lighting, combine it with textures and add any emissive lighting. Lightmaps are also applied in the final pass. Close to the camera, realtime lighting is used, and only baked indirect lighting is added. This crossfades into fully baked lighting further away from the camera.
Page last updated: 2012-08-18
RenderTech-ForwardRendering
This page describes details of the Forward rendering path.
The Forward rendering path renders each object in one or more passes, depending on the lights that affect the object. Lights themselves are also treated differently by Forward Rendering, depending on their settings and intensity.
Implementation Details
In Forward Rendering, some number of the brightest lights that affect each object are rendered in fully per-pixel lit mode. Then, up to 4 point lights are calculated per-vertex. The other lights are computed as Spherical Harmonics (SH), which is much faster but is only an approximation. Whether a light is rendered per-pixel is determined as follows:
- Lights that have their Render Mode set to Not Important are always per-vertex or SH.
- The brightest directional light is always per-pixel.
- Lights that have their Render Mode set to Important are always per-pixel.
- If the above results in fewer lights than the current Pixel Light Count Quality Setting, then more lights are rendered per-pixel, in order of decreasing brightness.
Rendering of each object happens as follows:
- Base Pass applies one per-pixel directional light and all per-vertex/SH lights.
- Other per-pixel lights are rendered in additional passes, one pass for each light.
For example, if there is some object that's affected by a number of lights (a circle in the picture below, affected by lights A to H):
Let's assume lights A to H have the same color and intensity, and all of them have Auto rendering mode, so they would be sorted in exactly this order for this object. The brightest lights will be rendered in per-pixel lit mode (A to D), then up to 4 lights in per-vertex lit mode (D to G), and finally the rest of the lights in SH (G to H):

Note that light groups overlap; for example, the last per-pixel light blends into per-vertex lit mode so there is less "light popping" as objects and lights move around.
Base Pass
The base pass renders the object with one per-pixel directional light and all SH lights. This pass also adds any lightmaps, ambient and emissive lighting from the shader. The directional light rendered in this pass can have shadows. Note that lightmapped objects do not get illumination from SH lights.
Additional Passes
Additional passes are rendered for each additional per-pixel light that affects the object. Lights in these passes can't have shadows (so as a result, Forward Rendering supports one directional light with shadows).
Performance Considerations
Spherical Harmonics lights are very fast to render. They have a tiny cost on the CPU, and are actually free for the GPU to apply (that is, the base pass always computes SH lighting; due to the way SH lights work, the cost is exactly the same no matter how many SH lights there are).
The downsides of SH lights are:
- They are computed at the object's vertices, not pixels. This means they do not support light Cookies or normal maps.
- SH lighting is very low frequency. You can't have sharp lighting transitions with SH lights. They also only affect diffuse lighting (SH is too low frequency for specular highlights).
- SH lighting is not local; point or spot SH lights close to some surface will "look wrong".
In summary, SH lights are often good enough for small dynamic objects.
Page last updated: 2010-07-08
RenderTech-VertexLit
This page describes details of the Vertex Lit rendering path.
The Vertex Lit path generally renders each object in one pass, with lighting from all lights calculated at the object's vertices.
It's the fastest rendering path and has the widest hardware support (however, keep in mind that it does not work on consoles).
Since all lighting is calculated at the vertex level, this rendering path does not support most per-pixel effects: shadows, normal mapping, light cookies, and highly detailed specular highlights are not supported.
Page last updated: 2012-11-20
RenderTech-HardwareRequirements
Summary
| | PC/Mac | iOS/Android | Flash | 360/PS3 |
| Deferred lighting | SM3.0, GPU support | - | - | Yes |
| Forward rendering | SM2.0 | OpenGL ES 2.0 | Yes | Yes |
| Vertex Lit rendering | Yes | Yes | Yes | - |
| Realtime Shadows | SM2.0, GPU support | - | partial | Yes |
| Image Effects | Most need SM2.0 | Most need OpenGL ES 2.0 | partial | Yes |
| Vertex Shaders | SM1.1 | OpenGL ES 2.0 | Yes | Yes |
| Pixel Shaders | SM2.0 | OpenGL ES 2.0 | Yes | Yes |
| Fixed Function Shaders | Yes | Yes | Yes | - |
Realtime Shadows
Realtime Shadows currently work on desktop and console platforms. On desktops, they generally need a Shader Model 2.0 capable GPU. On Windows (Direct3D), the GPU also needs to support shadow mapping features; most discrete GPUs have supported that since 2003 and most integrated GPUs since 2007. Technically, on Direct3D 9 the GPU has to support the D16/D24X8 or DF16/DF24 texture formats, and on OpenGL it has to support the GL_ARB_depth_texture extension.
Realtime shadows are also supported on Flash, with limitations: there is no depth bias (which can lead to self-shadowing artifacts) and there are shader limitations (shadow edges can be somewhat jagged).
Mobile shadows (iOS/Android) require OpenGL ES 2.0 and the GL_OES_depth_texture extension. Note that this extension is in particular not present on Tegra-based Android devices, so shadows do not work there.
Image Effects
Image Effects require render-to-texture functionality, which is generally supported on anything made in this millennium. However, all but the simplest effects require programmable pixel shaders, so for all practical purposes they require Shader Model 2.0 on desktop (discrete GPUs since 2003, integrated GPUs since 2005) and OpenGL ES 2.0 on mobile platforms.
Some image effects work on Flash, but quite a lot of them do not; either due to no support for non-power-of-two textures, shader limitations or lacking features like depth texture support.
Shaders
In Unity, you can write fixed function or programmable shaders. Fixed function shaders are supported everywhere except consoles (Xbox 360 and Playstation 3). Programmable shaders default to Shader Model 2.0 (desktop) and OpenGL ES 2.0 (mobile). On desktop platforms, it is possible to target Shader Model 1.1 for vertex shaders.
Page last updated: 2012-11-26
SL-Reference
Shader Reference
Shaders in Unity can be written in one of three different ways:
- as surface shaders,
- as vertex and fragment shaders, or
- as fixed function shaders.
The shader tutorial can guide you on choosing the right type for your needs.
Regardless of which type you choose, the actual body of the shader code is always wrapped in a language called ShaderLab, which is used to organize the shader structure. It looks like this:
Shader "MyShader" {
Properties {
_MyTexture ("Texture", 2D) = "white" {}
// other properties like colors or vectors go here as well
}
SubShader {
// here goes the 'meat' of your
// - surface shader or
// - vertex and fragment shader or
// - fixed function shader
}
SubShader {
// here goes a simpler version of the SubShader above that can run on older graphics cards
}
}
We recommend that you start by reading about some basic concepts of the ShaderLab syntax in the sections listed below, and then move on to surface shaders or vertex and fragment shaders in the other sections. Since fixed function shaders are written using ShaderLab only, you will find more information about them in the ShaderLab reference itself.
The reference below includes plenty of examples for the different types of shaders. For even more examples of surface shaders in particular, the source of Unity's built-in shaders is available in the Resources section. Unity's Image Effects package contains a lot of interesting vertex and fragment shaders.
After reading the shader reference, check out the shader tutorial as well.
- Writing Surface Shaders
- Writing vertex and fragment shaders
- ShaderLab syntax: Shader
- Advanced ShaderLab topics
- ShaderLab builtin values
SL-SurfaceShaders
Writing shaders that interact with lighting is complex. There are different light types, different shadow options, and different rendering paths (forward and deferred rendering), and the shader should somehow handle all that complexity.
Surface Shaders in Unity are a code generation approach that makes it much easier to write lit shaders than using low level vertex/pixel shader programs. There is no custom language, magic, or ninjas involved in Surface Shaders; they simply generate all the repetitive code that would otherwise have to be written by hand. You still write shader code in Cg/HLSL.
For some examples, take a look at Surface Shader Examples and Surface Shader Custom Lighting Examples.
How it works
You define a "surface function" that takes any UVs or data you need as input, and fills in an output structure SurfaceOutput. SurfaceOutput basically describes properties of the surface (its albedo color, normal, emission, specularity, etc.). You write this code in Cg/HLSL.
The Surface Shader compiler then figures out what inputs are needed and what outputs are filled in, and generates actual vertex & pixel shaders, as well as the rendering passes needed to handle forward and deferred rendering.
The standard output structure of surface shaders is this:
struct SurfaceOutput {
half3 Albedo;
half3 Normal;
half3 Emission;
half Specular;
half Gloss;
half Alpha;
};
Samples
See the Surface Shader Examples, Surface Shader Custom Lighting Examples and Surface Shader Tessellation pages.
Surface Shader compile directives
Like any other shader, a surface shader is placed inside a CGPROGRAM..ENDCG block. The differences are:
- It must be placed inside a SubShader block, not inside a Pass. A surface shader will compile into multiple passes itself.
- It uses the #pragma surface ... directive to indicate that it is a surface shader.
The #pragma surface directive is:
#pragma surface surfaceFunction lightModel [optionalparams]
Required parameters:
- surfaceFunction - the Cg function that has the surface shader code. The function should have the form void surf (Input IN, inout SurfaceOutput o), where Input is a structure you have defined. Input should contain any texture coordinates and extra automatic variables needed by the surface function.
- lightModel - the lighting model to use. Built-in ones are Lambert (diffuse) and BlinnPhong (specular). See the Custom Lighting Models page for how to write your own.
Optional parameters:
- alpha - Alpha blending mode. Use this for semitransparent shaders.
- alphatest:VariableName - Alpha testing mode. Use this for transparent-cutout shaders. The cutoff value is in a float variable with VariableName.
- vertex:VertexFunction - Custom vertex modification function. See the Tree Bark shader for an example.
- finalcolor:ColorFunction - Custom final color modifier. See Surface Shader Examples for details.
- exclude_path:prepass or exclude_path:forward - Do not generate passes for the given rendering path.
- addshadow - Add shadow caster & collector passes. Commonly used with custom vertex modification, so that shadow casting also gets the procedural vertex animation.
- dualforward - Use dual lightmaps in the forward rendering path.
- fullforwardshadows - Support all shadow types in the Forward rendering path.
- decal:add - Additive decal shader (e.g. terrain AddPass).
- decal:blend - Semitransparent decal shader.
- softvegetation - Makes the surface shader only be rendered when Soft Vegetation is on.
- noambient - Do not apply any ambient lighting or spherical harmonics lights.
- novertexlights - Do not apply any spherical harmonics or per-vertex lights in Forward rendering.
- nolightmap - Disables lightmap support in this shader (makes the shader smaller).
- nodirlightmap - Disables directional lightmap support in this shader (makes the shader smaller).
- noforwardadd - Disables the Forward rendering additive pass. This makes the shader support one full directional light, with all other lights computed per-vertex/SH. Makes the shader smaller as well.
- approxview - Computes the normalized view direction per-vertex instead of per-pixel, for shaders that need it. This is faster, but the view direction is not entirely correct when the camera gets close to the surface.
- halfasview - Passes the half-direction vector into the lighting function instead of the view direction. The half-direction is computed and normalized per vertex. This is faster, but not entirely correct.
- tessellate:TessFunction - Use DX11 GPU tessellation; see Surface Shader Tessellation for details.
Additionally, you can write #pragma debug inside the CGPROGRAM block; the surface compiler will then include a lot of comments in the generated code. You can view that using "Open Compiled Shader" in the shader's Inspector.
Surface Shader input structure
The input structure Input generally has any texture coordinates needed by the shader. Texture coordinates must be named "uv" followed by the texture name (or start with "uv2" to use the second texture coordinate set).
Additional values that can be put into the Input structure:
- float3 viewDir - contains the view direction, for computing parallax effects, rim lighting, etc.
- float4 with the COLOR semantic - contains the interpolated per-vertex color.
- float4 screenPos - contains the screen space position, for reflection effects. Used by the WetStreet shader in Dark Unity, for example.
- float3 worldPos - contains the world space position.
- float3 worldRefl - contains the world reflection vector if the surface shader does not write to o.Normal. See the Reflect-Diffuse shader for an example.
- float3 worldNormal - contains the world normal vector if the surface shader does not write to o.Normal.
- float3 worldRefl; INTERNAL_DATA - contains the world reflection vector if the surface shader writes to o.Normal. To get the reflection vector based on a per-pixel normal map, use WorldReflectionVector (IN, o.Normal). See the Reflect-Bumped shader for an example.
- float3 worldNormal; INTERNAL_DATA - contains the world normal vector if the surface shader writes to o.Normal. To get the normal vector based on a per-pixel normal map, use WorldNormalVector (IN, o.Normal).
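For example, a surface function that needs several of these values at once simply declares them together (a hypothetical combination of the members listed above; the generated code fills them in automatically):

```cg
struct Input {
    float2 uv_MainTex;   // texture coordinates: "uv" + texture name
    float3 viewDir;      // view direction, e.g. for rim lighting
    float3 worldPos;     // world space position
    float4 screenPos;    // screen space position
};
```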
Further Documentation
Page last updated: 2012-11-20
SL-SurfaceShaderExamples
Here are some examples of Surface Shaders. The examples below focus on using built-in lighting models; examples of how to implement custom lighting models are in Surface Shader Lighting Examples.
Simple
We'll start with a very basic shader and build on that. Here's a shader that just sets the surface color to white. It uses the built-in Lambert (diffuse) lighting model.
Shader "Example/Diffuse Simple" {
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf Lambert
struct Input {
float4 color : COLOR;
};
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = 1;
}
ENDCG
}
Fallback "Diffuse"
}
Here's how it looks on a model with two lights set up:
Texture
An all-white object is quite boring, so let's add a texture. We'll add a Properties block to the shader, so we get a texture selector in our Material. The other changes are shown in bold below.
Shader "Example/Diffuse Texture" {
Properties {
_MainTex ("Texture", 2D) = "white" {}
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf Lambert
struct Input {
float2 uv_MainTex;
};
sampler2D _MainTex;
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
}
ENDCG
}
Fallback "Diffuse"
}

Normal mapping
Let's add some normal mapping:
Shader "Example/Diffuse Bump" {
Properties {
_MainTex ("Texture", 2D) = "white" { }
_BumpMap ("Bumpmap", 2D) = "bump" {}
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf Lambert
struct Input {
float2 uv_MainTex;
float2 uv_BumpMap;
};
sampler2D _MainTex;
sampler2D _BumpMap;
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));
}
ENDCG
}
Fallback "Diffuse"
}

Rim Lighting
Now let's add some rim lighting to highlight the edges of the object. We'll add some emissive light based on the angle between the surface normal and the view direction. For that, we'll use the viewDir built-in surface shader variable.
Shader "Example/Rim" {
Properties {
_MainTex ("Texture", 2D) = "white" { }
_BumpMap ("Bumpmap", 2D) = "bump" {}
_RimColor ("Rim Color", Color) = (0.26,0.19,0.16,0.0)
_RimPower ("Rim Power", Range(0.5,8.0)) = 3.0
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf Lambert
struct Input {
float2 uv_MainTex;
float2 uv_BumpMap;
float3 viewDir;
};
sampler2D _MainTex;
sampler2D _BumpMap;
float4 _RimColor;
float _RimPower;
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));
half rim = 1.0 - saturate(dot (normalize(IN.viewDir), o.Normal));
o.Emission = _RimColor.rgb * pow (rim, _RimPower);
}
ENDCG
}
Fallback "Diffuse"
}

Detail Texture
For a different effect, let's add a detail texture that is combined with the base texture. The detail texture uses the same UVs, but usually a different Tiling in the Material, so we have to use different input UV coordinates.
Shader "Example/Detail" {
Properties {
_MainTex ("Texture", 2D) = "white" { }
_BumpMap ("Bumpmap", 2D) = "bump" {}
_Detail ("Detail", 2D) = "gray" {}
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf Lambert
struct Input {
float2 uv_MainTex;
float2 uv_BumpMap;
float2 uv_Detail;
};
sampler2D _MainTex;
sampler2D _BumpMap;
sampler2D _Detail;
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
o.Albedo *= tex2D (_Detail, IN.uv_Detail).rgb * 2;
o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));
}
ENDCG
}
Fallback "Diffuse"
}
Using a checker texture does not make much practical sense, but illustrates what happens:
Detail Texture in Screen Space
How about a detail texture in screen space? It does not make much sense for a soldier head model, but it illustrates how the built-in screenPos input might be used:
Shader "Example/ScreenPos" {
Properties {
_MainTex ("Texture", 2D) = "white" { }
_Detail ("Detail", 2D) = "gray" {}
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf Lambert
struct Input {
float2 uv_MainTex;
float4 screenPos;
};
sampler2D _MainTex;
sampler2D _Detail;
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
float2 screenUV = IN.screenPos.xy / IN.screenPos.w;
screenUV *= float2(8,6);
o.Albedo *= tex2D (_Detail, screenUV).rgb * 2;
}
ENDCG
}
Fallback "Diffuse"
}
Normal mapping was removed from the shader above, just to make it shorter.
Cubemap Reflection
Here's a shader that does cubemapped reflection using the built-in worldRefl input. It's actually very similar to the built-in Reflective/Diffuse shader:
Shader "Example/WorldRefl" {
Properties {
_MainTex ("Texture", 2D) = "white" { }
_Cube ("Cubemap", CUBE) = "" {}
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf Lambert
struct Input {
float2 uv_MainTex;
float3 worldRefl;
};
sampler2D _MainTex;
samplerCUBE _Cube;
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb * 0.5;
o.Emission = texCUBE (_Cube, IN.worldRefl).rgb;
}
ENDCG
}
Fallback "Diffuse"
}
And since it assigns the reflection color as Emission, we get a very shiny soldier.
If you want reflections that are affected by normal maps, it needs to be slightly more involved: INTERNAL_DATA needs to be added to the Input structure, and the WorldReflectionVector function must be used to compute the per-pixel reflection vector after you've written the Normal output.
Shader "Example/WorldRefl Normalmap" {
Properties {
_MainTex ("Texture", 2D) = "white" { }
_BumpMap ("Bumpmap", 2D) = "bump" {}
_Cube ("Cubemap", CUBE) = "" {}
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf Lambert
struct Input {
float2 uv_MainTex;
float2 uv_BumpMap;
float3 worldRefl;
INTERNAL_DATA
};
sampler2D _MainTex;
sampler2D _BumpMap;
samplerCUBE _Cube;
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb * 0.5;
o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));
o.Emission = texCUBE (_Cube, WorldReflectionVector (IN, o.Normal)).rgb;
}
ENDCG
}
Fallback "Diffuse"
}
Here's a normal mapped shiny soldier.
Slices via World Space Position
Here's a shader that "slices" the object by discarding pixels in nearly horizontal rings. It does that by using the clip() Cg/HLSL function based on the world position of a pixel. We'll use the worldPos built-in surface shader variable.
Shader "Example/Slices" {
Properties {
_MainTex ("Texture", 2D) = "white" { }
_BumpMap ("Bumpmap", 2D) = "bump" {}
}
SubShader {
Tags { "RenderType" = "Opaque" }
Cull Off
CGPROGRAM
#pragma surface surf Lambert
struct Input {
float2 uv_MainTex;
float2 uv_BumpMap;
float3 worldPos;
};
sampler2D _MainTex;
sampler2D _BumpMap;
void surf (Input IN, inout SurfaceOutput o) {
clip (frac((IN.worldPos.y+IN.worldPos.z*0.1) * 5) - 0.5);
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));
}
ENDCG
}
Fallback "Diffuse"
}

Normal Extrusion with Vertex Modifier
It is possible to use a "vertex modifier" function that modifies incoming vertex data in the vertex shader. This can be used for procedural animation, extrusion along normals, and so on. The surface shader compilation directive vertex:functionName is used for that, with a function that takes an inout appdata_full parameter.
Here's a shader that moves vertices along their normals by the amount specified in the material:
Shader "Example/Normal Extrusion" {
Properties {
_MainTex ("Texture", 2D) = "white" { }
_Amount ("Extrusion Amount", Range(-1,1)) = 0.5
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf Lambert vertex:vert
struct Input {
float2 uv_MainTex;
};
float _Amount;
void vert (inout appdata_full v) {
v.vertex.xyz += v.normal * _Amount;
}
sampler2D _MainTex;
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
}
ENDCG
}
Fallback "Diffuse"
}
Moving vertices along their normals makes the soldier look fatter.
Custom data computed per-vertex
Using a vertex modifier function, it is also possible to compute custom data in the vertex shader, which is then passed to the surface shader function per-pixel. The same compilation directive vertex:functionName is used, but the function should take two parameters: inout appdata_full and out Input. You can fill in any Input member that is not a built-in value there.
The example below defines a custom float3 customColor member, which is computed in the vertex function:
Shader "Example/Custom Vertex Data" {
Properties {
_MainTex ("Texture", 2D) = "white" {}
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf Lambert vertex:vert
struct Input {
float2 uv_MainTex;
float3 customColor;
};
void vert (inout appdata_full v, out Input o) {
UNITY_INITIALIZE_OUTPUT(Input,o);
o.customColor = abs(v.normal);
}
sampler2D _MainTex;
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
o.Albedo *= IN.customColor;
}
ENDCG
}
Fallback "Diffuse"
}
In this example customColor is set to the absolute value of the normal.
A more practical use would be computing any per-vertex data that is not provided by the built-in Input variables, or optimizing shader computations. For example, it's possible to compute rim lighting at the object's vertices, instead of doing that per-pixel in the surface shader.
Final Color Modifier
It is possible to use a "final color modifier" function that modifies the final color computed by the shader. The surface shader compilation directive finalcolor:functionName is used for that, with a function that takes Input IN, SurfaceOutput o, inout fixed4 color parameters.
Here's a simple shader that applies a tint to the final color. This is different from just applying a tint to the surface albedo color: this tint also affects any color that comes from lightmaps, light probes and similar extra sources.
Shader "Example/Tint Final Color" {
Properties {
_MainTex ("Texture", 2D) = "white" {}
_ColorTint ("Tint", Color) = (1.0, 0.6, 0.6, 1.0)
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf Lambert finalcolor:mycolor
struct Input {
float2 uv_MainTex;
};
fixed4 _ColorTint;
void mycolor (Input IN, SurfaceOutput o, inout fixed4 color)
{
color *= _ColorTint;
}
sampler2D _MainTex;
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
}
ENDCG
}
Fallback "Diffuse"
}

Custom Fog with Final Color Modifier
A common case for using a final color modifier (see above) is implementing completely custom fog. Fog needs to affect the final computed pixel color, which is exactly what the finalcolor modifier does.
Here's a shader that applies a fog tint based on distance from the screen center. This combines the vertex modifier with custom vertex data (fog) and a final color modifier. When used in a forward rendering additive pass, fog needs to fade to black; this example handles that as well, with a check for UNITY_PASS_FORWARDADD.
Shader "Example/Fog via Final Color" {
Properties {
_MainTex ("Texture", 2D) = "white" {}
_FogColor ("Fog Color", Color) = (0.3, 0.4, 0.7, 1.0)
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf Lambert finalcolor:mycolor vertex:myvert
struct Input {
float2 uv_MainTex;
half fog;
};
void myvert (inout appdata_full v, out Input data)
{
UNITY_INITIALIZE_OUTPUT(Input,data);
float4 hpos = mul (UNITY_MATRIX_MVP, v.vertex);
data.fog = min (1, dot (hpos.xy, hpos.xy) * 0.1);
}
fixed4 _FogColor;
void mycolor (Input IN, SurfaceOutput o, inout fixed4 color)
{
fixed3 fogColor = _FogColor.rgb;
#ifdef UNITY_PASS_FORWARDADD
fogColor = 0;
#endif
color.rgb = lerp (color.rgb, fogColor, IN.fog);
}
sampler2D _MainTex;
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
}
ENDCG
}
Fallback "Diffuse"
}

SL-SurfaceShaderLighting
When writing Surface Shaders, you describe the properties of a surface (albedo color, normal, etc.), and the lighting interaction is computed by a Lighting Model. The built-in lighting models are Lambert (diffuse lighting) and BlinnPhong (specular lighting).
Sometimes you might want to use a custom lighting model, and it is possible to do that in Surface Shaders. A lighting model is nothing more than a couple of Cg/HLSL functions that match some conventions. The built-in Lambert and BlinnPhong models are defined in the Lighting.cginc file inside Unity (the location differs between the Windows and Mac installations).
Declaring lighting models
A lighting model consists of regular functions with names starting with Lighting. They can be declared anywhere in your shader file, or in one of the included files. The functions are:
- half4 LightingName (SurfaceOutput s, half3 lightDir, half atten); This is used in the forward rendering path for light models that are not view direction dependent (e.g. diffuse).
- half4 LightingName (SurfaceOutput s, half3 lightDir, half3 viewDir, half atten); This is used in the forward rendering path for light models that are view direction dependent (e.g. specular).
- half4 LightingName_PrePass (SurfaceOutput s, half4 light); This is used in the deferred lighting path.
Note that you don't need to declare all of these functions. A lighting model either uses the view direction or it does not. Similarly, if the lighting model does not work in deferred lighting, you just do not declare the _PrePass function, and all shaders that use it will compile to forward rendering only.
Decoding directional lightmaps needs to be customized in some circumstances in a similar fashion as the lighting function for forward and deferred lighting. Use one of the functions below depending on whether your light model is view direction dependent or not. Both functions handle the forward and deferred lighting rendering paths automatically.
- half4 LightingName_DirLightmap (SurfaceOutput s, fixed4 color, fixed4 scale, bool surfFuncWritesNormal); This is used for light models that are not view direction dependent (e.g. diffuse).
- half4 LightingName_DirLightmap (SurfaceOutput s, fixed4 color, fixed4 scale, half3 viewDir, bool surfFuncWritesNormal, out half3 specColor); This is used for light models that are view direction dependent.
Examples
Surface Shader Lighting Examples
Page last updated: 2012-11-13
SL-SurfaceShaderLightingExamples
Here are some examples of custom lighting models in Surface Shaders. General Surface Shader examples are on this page.
Because Deferred Lighting does not play well with some custom per-material lighting models, most examples below make the shaders compile to Forward Rendering only.
Diffuse
We'll start with a shader that uses the built-in Lambert lighting model:
Shader "Example/Diffuse Texture" {
Properties {
_MainTex ("Texture", 2D) = "white" { }
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf Lambert
struct Input {
float2 uv_MainTex;
};
sampler2D _MainTex;
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
}
ENDCG
}
Fallback "Diffuse"
}
Here's how it looks with a texture and without an actual texture (one directional light is in the scene):
Now let's do exactly the same thing, but write our own lighting model instead of using the built-in Lambert one. Surface Shader Lighting Models are just functions that we need to write. Here's a simple Lambert one. Note that the "shader part" itself did not change at all (it is grayed out below):
Shader "Example/Diffuse Texture" {
Properties {
_MainTex ("Texture", 2D) = "white" {}
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf SimpleLambert
half4 LightingSimpleLambert (SurfaceOutput s, half3 lightDir, half atten) {
half NdotL = dot (s.Normal, lightDir);
half4 c;
c.rgb = s.Albedo * _LightColor0.rgb * (NdotL * atten * 2);
c.a = s.Alpha;
return c;
}
struct Input {
float2 uv_MainTex;
};
sampler2D _MainTex;
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
}
ENDCG
}
Fallback "Diffuse"
}
So here's our simple diffuse lighting model, the LightingSimpleLambert function. It computes lighting by taking a dot product between the surface normal and the light direction, and then applies the light attenuation and color.
Diffuse Wrap
Here's Wrapped Diffuse, a modification of diffuse lighting where illumination "wraps around" the edges of objects. It's useful for faking a subsurface scattering effect. Again, the surface shader itself did not change at all; we're just using a different lighting function.
Shader "Example/Diffuse Wrapped" {
Properties {
_MainTex ("Texture", 2D) = "white" {}
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf WrapLambert
half4 LightingWrapLambert (SurfaceOutput s, half3 lightDir, half atten) {
half NdotL = dot (s.Normal, lightDir);
half diff = NdotL * 0.5 + 0.5;
half4 c;
c.rgb = s.Albedo * _LightColor0.rgb * (diff * atten * 2);
c.a = s.Alpha;
return c;
}
struct Input {
float2 uv_MainTex;
};
sampler2D _MainTex;
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
}
ENDCG
}
Fallback "Diffuse"
}

Toon Ramp
Here's a "Ramp" lighting model that uses a texture ramp to define how the surface responds to the angle between the light and the normal. This can be used for a variety of effects, including toon lighting.
Shader "Example/Toon Ramp" {
Properties {
_MainTex ("Texture", 2D) = "white" {}
_Ramp ("Shading Ramp", 2D) = "gray" {}
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf Ramp
sampler2D _Ramp;
half4 LightingRamp (SurfaceOutput s, half3 lightDir, half atten) {
half NdotL = dot (s.Normal, lightDir);
half diff = NdotL * 0.5 + 0.5;
half3 ramp = tex2D (_Ramp, float2(diff)).rgb;
half4 c;
c.rgb = s.Albedo * _LightColor0.rgb * ramp * (atten * 2);
c.a = s.Alpha;
return c;
}
struct Input {
float2 uv_MainTex;
};
sampler2D _MainTex;
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
}
ENDCG
}
Fallback "Diffuse"
}
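The ramp texture acts as a one-dimensional lookup table indexed by the remapped N·L value. The Python sketch below imitates that lookup with a hypothetical three-band ramp stored in a list:

```python
# Hypothetical 3-band toon ramp: dark, mid, bright
RAMP = [0.2, 0.6, 1.0]

def ramp_lighting(ndotl):
    diff = ndotl * 0.5 + 0.5                          # remap N.L from [-1,1] to [0,1]
    idx = min(int(diff * len(RAMP)), len(RAMP) - 1)   # nearest-texel ramp lookup
    return RAMP[idx]

# Facing the light -> brightest band; edge-on -> middle band; facing away -> dark band
print(ramp_lighting(1.0), ramp_lighting(0.0), ramp_lighting(-1.0))
```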


Simple Specular
Here's a simple specular lighting model. It's quite simple compared to what the built-in BlinnPhong actually does; it's included here just to show how it's done.
Shader "Example/Simple Specular" {
Properties {
_MainTex ("Texture", 2D) = "white" {}
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf SimpleSpecular
half4 LightingSimpleSpecular (SurfaceOutput s, half3 lightDir, half3 viewDir, half atten) {
half3 h = normalize (lightDir + viewDir);
half diff = max (0, dot (s.Normal, lightDir));
float nh = max (0, dot (s.Normal, h));
float spec = pow (nh, 48.0);
half4 c;
c.rgb = (s.Albedo * _LightColor0.rgb * diff + _LightColor0.rgb * spec) * (atten * 2);
c.a = s.Alpha;
return c;
}
struct Input {
float2 uv_MainTex;
};
sampler2D _MainTex;
void surf (Input IN, inout SurfaceOutput o) {
o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
}
ENDCG
}
Fallback "Diffuse"
}
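The specular term above is the classic Blinn half-vector model: build the half vector between the light and view directions, then raise its alignment with the normal to a power. A minimal Python version of just that math:

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_specular(normal, light_dir, view_dir, shininess=48.0):
    """spec = pow(max(0, dot(N, H)), shininess), H = normalize(L + V)."""
    h = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    nh = max(0.0, dot(normal, h))
    return nh ** shininess

# Light and view both along the normal: half vector aligns, full highlight
spec = blinn_specular((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
print(spec)  # 1.0
```

The high exponent (48 here) is what keeps the highlight tight; lowering it spreads the highlight out.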

SL-SurfaceShaderTessellation
Surface Shaders have some support for DirectX 11 GPU Tessellation. The idea is:
- Tessellation is indicated by the tessellate:FunctionName modifier. That function computes the triangle edge and inside tessellation factors.
- When tessellation is used, the "vertex modifier" (vertex:FunctionName) is invoked after tessellation, for each generated vertex in the domain shader. This is where you would typically do displacement mapping.
- Surface shaders can optionally compute Phong tessellation to smooth the model surface even without any displacement mapping.
Current limitations of tessellation support:
- Only triangle domain - no quads, no isoline tessellation.
- When tessellation is used, the shader is automatically compiled for the Shader Model 5.0 target, which means it will only work on DX11.
No GPU tessellation, displacement in the vertex modifier
Let's start with a surface shader that does some displacement mapping without using tessellation. It just moves vertices along their normals based on the amount read from a displacement map:
Shader "Tessellation Sample" {
Properties {
_MainTex ("Base (RGB)", 2D) = "white" {}
_DispTex ("Disp Texture", 2D) = "gray" {}
_NormalMap ("Normalmap", 2D) = "bump" {}
_Displacement ("Displacement", Range(0, 1.0)) = 0.3
_Color ("Color", color) = (1,1,1,0)
_SpecColor ("Spec color", color) = (0.5,0.5,0.5,0.5)
}
SubShader {
Tags { "RenderType"="Opaque" }
LOD 300
CGPROGRAM
#pragma surface surf BlinnPhong addshadow fullforwardshadows vertex:disp nolightmap
#pragma target 5.0
struct appdata {
float4 vertex : POSITION;
float4 tangent : TANGENT;
float3 normal : NORMAL;
float2 texcoord : TEXCOORD0;
};
sampler2D _DispTex;
float _Displacement;
void disp (inout appdata v)
{
float d = tex2Dlod(_DispTex, float4(v.texcoord.xy,0,0)).r * _Displacement;
v.vertex.xyz += v.normal * d;
}
struct Input {
float2 uv_MainTex;
};
sampler2D _MainTex;
sampler2D _NormalMap;
fixed4 _Color;
void surf (Input IN, inout SurfaceOutput o) {
half4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
o.Albedo = c.rgb;
o.Specular = 0.2;
o.Gloss = 1.0;
o.Normal = UnpackNormal(tex2D(_NormalMap, IN.uv_MainTex));
}
ENDCG
}
FallBack "Diffuse"
}
The above shader is fairly standard; points of interest:
- The vertex modifier disp samples the displacement map and moves vertices along their normals.
- It uses a custom "vertex data input" structure (appdata) instead of the default appdata_full. This is not needed yet, but it's more efficient for tessellation to use as small a structure as possible.
- Since our vertex data does not have a second UV coordinate, we add the nolightmap directive to exclude lightmaps.
Here's how some simple objects would look with this shader:

Fixed amount of tessellation
Let's add a fixed amount of tessellation, i.e. the same tessellation level for the whole mesh. This approach is suitable if your model's faces are roughly the same size on screen. A script could then change the tessellation level from code, based on distance to the camera.
Shader "Tessellation Sample" {
Properties {
_Tess ("Tessellation", Range(1,32)) = 4
_MainTex ("Base (RGB)", 2D) = "white" {}
_DispTex ("Disp Texture", 2D) = "gray" {}
_NormalMap ("Normalmap", 2D) = "bump" {}
_Displacement ("Displacement", Range(0, 1.0)) = 0.3
_Color ("Color", color) = (1,1,1,0)
_SpecColor ("Spec color", color) = (0.5,0.5,0.5,0.5)
}
SubShader {
Tags { "RenderType"="Opaque" }
LOD 300
CGPROGRAM
#pragma surface surf BlinnPhong addshadow fullforwardshadows vertex:disp tessellate:tessFixed nolightmap
#pragma target 5.0
struct appdata {
float4 vertex : POSITION;
float4 tangent : TANGENT;
float3 normal : NORMAL;
float2 texcoord : TEXCOORD0;
};
float _Tess;
float4 tessFixed()
{
return _Tess;
}
sampler2D _DispTex;
float _Displacement;
void disp (inout appdata v)
{
float d = tex2Dlod(_DispTex, float4(v.texcoord.xy,0,0)).r * _Displacement;
v.vertex.xyz += v.normal * d;
}
struct Input {
float2 uv_MainTex;
};
sampler2D _MainTex;
sampler2D _NormalMap;
fixed4 _Color;
void surf (Input IN, inout SurfaceOutput o) {
half4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
o.Albedo = c.rgb;
o.Specular = 0.2;
o.Gloss = 1.0;
o.Normal = UnpackNormal(tex2D(_NormalMap, IN.uv_MainTex));
}
ENDCG
}
FallBack "Diffuse"
}
The tessellation function, tessFixed in our shader, returns four tessellation factors as a single float4 value: three factors, one for each edge of the triangle, and one factor for the inside of the triangle. Here, we just return a constant value that is set in the material properties.

Distance-based tessellation
We can also change the tessellation level based on distance from the camera. For example, we could define two distance values: the distance at which tessellation is at its maximum (say, 10 meters), and the distance towards which the tessellation level gradually decreases (say, 20 meters).
Shader "Tessellation Sample" {
Properties {
_Tess ("Tessellation", Range(1,32)) = 4
_MainTex ("Base (RGB)", 2D) = "white" {}
_DispTex ("Disp Texture", 2D) = "gray" {}
_NormalMap ("Normalmap", 2D) = "bump" {}
_Displacement ("Displacement", Range(0, 1.0)) = 0.3
_Color ("Color", color) = (1,1,1,0)
_SpecColor ("Spec color", color) = (0.5,0.5,0.5,0.5)
}
SubShader {
Tags { "RenderType"="Opaque" }
LOD 300
CGPROGRAM
#pragma surface surf BlinnPhong addshadow fullforwardshadows vertex:disp tessellate:tessDistance nolightmap
#pragma target 5.0
#include "Tessellation.cginc"
struct appdata {
float4 vertex : POSITION;
float4 tangent : TANGENT;
float3 normal : NORMAL;
float2 texcoord : TEXCOORD0;
};
float _Tess;
float4 tessDistance (appdata v0, appdata v1, appdata v2) {
float minDist = 10.0;
float maxDist = 25.0;
return UnityDistanceBasedTess(v0.vertex, v1.vertex, v2.vertex, minDist, maxDist, _Tess);
}
sampler2D _DispTex;
float _Displacement;
void disp (inout appdata v)
{
float d = tex2Dlod(_DispTex, float4(v.texcoord.xy,0,0)).r * _Displacement;
v.vertex.xyz += v.normal * d;
}
struct Input {
float2 uv_MainTex;
};
sampler2D _MainTex;
sampler2D _NormalMap;
fixed4 _Color;
void surf (Input IN, inout SurfaceOutput o) {
half4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
o.Albedo = c.rgb;
o.Specular = 0.2;
o.Gloss = 1.0;
o.Normal = UnpackNormal(tex2D(_NormalMap, IN.uv_MainTex));
}
ENDCG
}
FallBack "Diffuse"
}
Here the tessellation function takes three parameters: the vertex data of the three triangle corners before tessellation. This is needed to compute the tessellation levels, which now depend on vertex positions. We include the built-in helper file Tessellation.cginc and call its UnityDistanceBasedTess function to do all the work. That function computes the distance of each vertex to the camera and derives the final tessellation factors.
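Conceptually, a distance-based factor is an inverse lerp of camera distance between the two thresholds; the Python sketch below illustrates the idea (it is an illustration of the concept, not Unity's exact implementation of UnityDistanceBasedTess):

```python
def distance_based_factor(dist, min_dist, max_dist, max_tess):
    """Full tessellation at/below min_dist, fading down to 1 at/above max_dist."""
    t = (dist - min_dist) / (max_dist - min_dist)
    t = min(max(t, 0.0), 1.0)              # clamp the interpolant to [0, 1]
    return max_tess * (1.0 - t) + 1.0 * t  # lerp from max_tess down to 1

print(distance_based_factor(5.0, 10.0, 25.0, 16.0))   # closer than min: 16.0
print(distance_based_factor(25.0, 10.0, 25.0, 16.0))  # at max distance: 1.0
```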

Edge length based tessellation
Purely distance-based tessellation works well only when triangle sizes are quite similar. In the image above, you can see that objects with small triangles are tessellated too much, while objects with large triangles aren't tessellated enough.
Instead, tessellation levels could be computed based on triangle edge length on the screen: the longer the edge, the larger the tessellation factor that should be applied.
Shader "Tessellation Sample" {
Properties {
_EdgeLength ("Edge length", Range(2,50)) = 15
_MainTex ("Base (RGB)", 2D) = "white" {}
_DispTex ("Disp Texture", 2D) = "gray" {}
_NormalMap ("Normalmap", 2D) = "bump" {}
_Displacement ("Displacement", Range(0, 1.0)) = 0.3
_Color ("Color", color) = (1,1,1,0)
_SpecColor ("Spec color", color) = (0.5,0.5,0.5,0.5)
}
SubShader {
Tags { "RenderType"="Opaque" }
LOD 300
CGPROGRAM
#pragma surface surf BlinnPhong addshadow fullforwardshadows vertex:disp tessellate:tessEdge nolightmap
#pragma target 5.0
#include "Tessellation.cginc"
struct appdata {
float4 vertex : POSITION;
float4 tangent : TANGENT;
float3 normal : NORMAL;
float2 texcoord : TEXCOORD0;
};
float _EdgeLength;
float4 tessEdge (appdata v0, appdata v1, appdata v2)
{
return UnityEdgeLengthBasedTess (v0.vertex, v1.vertex, v2.vertex, _EdgeLength);
}
sampler2D _DispTex;
float _Displacement;
void disp (inout appdata v)
{
float d = tex2Dlod(_DispTex, float4(v.texcoord.xy,0,0)).r * _Displacement;
v.vertex.xyz += v.normal * d;
}
struct Input {
float2 uv_MainTex;
};
sampler2D _MainTex;
sampler2D _NormalMap;
fixed4 _Color;
void surf (Input IN, inout SurfaceOutput o) {
half4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
o.Albedo = c.rgb;
o.Specular = 0.2;
o.Gloss = 1.0;
o.Normal = UnpackNormal(tex2D(_NormalMap, IN.uv_MainTex));
}
ENDCG
}
FallBack "Diffuse"
}
Here again, we just call the UnityEdgeLengthBasedTess function from Tessellation.cginc to do all the actual work.
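Conceptually, an edge-length based factor divides each edge's on-screen length by the desired edge length; the Python sketch below illustrates the idea (it is an illustration of the concept, not Unity's exact implementation of UnityEdgeLengthBasedTess):

```python
def edge_tess_factor(edge_pixels, desired_edge_pixels):
    """Longer on-screen edges get proportionally higher tessellation factors."""
    return max(1.0, edge_pixels / desired_edge_pixels)

# A 150-pixel edge with a 15-pixel target subdivides 10x;
# a tiny 5-pixel edge is not subdivided at all
print(edge_tess_factor(150.0, 15.0), edge_tess_factor(5.0, 15.0))  # 10.0 1.0
```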

For performance reasons, it's advisable to call the UnityEdgeLengthBasedTessCull function instead, which performs patch frustum culling. This makes the shader a bit more expensive, but saves a lot of GPU work for parts of meshes that are outside the camera's view.
Phong Tessellation
Phong Tessellation modifies the positions of the subdivided faces so that the resulting surface follows the mesh normals a bit. It's quite an effective way of making low-poly meshes appear smoother.
Unity's surface shaders can compute Phong tessellation automatically using the tessphong:VariableName compilation directive. Here's an example shader:
Shader "Phong Tessellation" {
Properties {
_EdgeLength ("Edge length", Range(2,50)) = 5
_Phong ("Phong Strength", Range(0,1)) = 0.5
_MainTex ("Base (RGB)", 2D) = "white" {}
_Color ("Color", color) = (1,1,1,0)
}
SubShader {
Tags { "RenderType"="Opaque" }
LOD 300
CGPROGRAM
#pragma surface surf Lambert vertex:dispNone tessellate:tessEdge tessphong:_Phong nolightmap
#include "Tessellation.cginc"
struct appdata {
float4 vertex : POSITION;
float3 normal : NORMAL;
float2 texcoord : TEXCOORD0;
};
void dispNone (inout appdata v) { }
float _Phong;
float _EdgeLength;
float4 tessEdge (appdata v0, appdata v1, appdata v2)
{
return UnityEdgeLengthBasedTess (v0.vertex, v1.vertex, v2.vertex, _EdgeLength);
}
struct Input {
float2 uv_MainTex;
};
fixed4 _Color;
sampler2D _MainTex;
void surf (Input IN, inout SurfaceOutput o) {
half4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
o.Albedo = c.rgb;
o.Alpha = c.a;
}
ENDCG
}
FallBack "Diffuse"
}
Here's a comparison between a regular shader (top row) and one that uses Phong tessellation (bottom row). You can see that even without any displacement mapping, the surface becomes more rounded.

SL-ShaderPrograms
ShaderLab shaders encompass more than just "hardware shaders"; they do many things. They describe Properties that are displayed in the Material Inspector, contain multiple subshaders for different graphics hardware, configure fixed-function hardware state, and so on. The actual programmable shaders, like vertex and fragment programs, are just one part of the whole ShaderLab shader concept. Take a look at the shader tutorial for a basic introduction. Here, we'll call the low-level, close-to-the-hardware shaders shader programs.
If you want to write shaders that interact with lighting, take a look at the Surface Shaders documentation. This page covers shaders that do not interact with Unity lights (e.g. special effects, Image Effects, etc.).
Shader programs are written in the Cg / HLSL language, by embedding "snippets" in the shader text, somewhere inside the Pass command. They usually look like this:
Pass {
// ... the usual pass state setup ...
CGPROGRAM
// compilation directives for this snippet, e.g.:
#pragma vertex vert
#pragma fragment frag
// the Cg code itself
ENDCG
// ... the rest of pass setup ...
}
Cg snippets
Cg program snippets are written between CGPROGRAM and ENDCG.
At the start of the snippet, compilation directives can be given as #pragma statements. Directives indicating which shader functions to compile:
- #pragma vertex name - compile function name as the vertex shader.
- #pragma fragment name - compile function name as the fragment shader.
- #pragma geometry name - compile function name as a DX10 geometry shader. Having this option automatically turns on #pragma target 4.0, described below.
- #pragma hull name - compile function name as a DX11 hull shader. Having this option automatically turns on #pragma target 5.0, described below.
- #pragma domain name - compile function name as a DX11 domain shader. Having this option automatically turns on #pragma target 5.0, described below.
Other compilation directives:
- #pragma target name - which shader target to compile to. See shader targets for details.
- #pragma only_renderers space separated names - compile shader only for given renderers. By default shaders are compiled for all renderers. See renderers for details.
- #pragma exclude_renderers space separated names - do not compile shader for given renderers. By default shaders are compiled for all renderers. See renderers for details.
- #pragma glsl - when compiling shaders for desktop OpenGL platforms, convert Cg/HLSL into GLSL (instead of default setting which is ARB vertex/fragment programs). Use this to enable derivative instructions, texture sampling with explicit LOD levels, etc.
- #pragma glsl_no_auto_normalization - when compiling shaders for mobile GLSL (iOS/Android), turn off automatic normalization of normal & tangent vectors. By default, normals and tangents are normalized in the vertex shader on iOS/Android platforms.
- #pragma fragmentoption option - adds option to the compiled OpenGL fragment program. See the ARB fragment program specification for a list of allowed options. This directive has no effect on vertex programs or programs that are compiled to non-OpenGL targets.
Each snippet must contain a vertex program, a fragment program, or both. Thus a #pragma vertex or #pragma fragment directive (or both) is required.
Shader targets
By default, Unity compiles shaders into roughly shader model 2.0 equivalent. Using #pragma target allows shaders to be compiled into other capability levels. Currently these targets are supported:
- #pragma target 2.0 (default) - roughly shader model 2.0
- Shader Model 2.0 on Direct3D 9.
- ARB_vertex_program with 256 instruction limit and ARB_fragment_program with 96 instruction limit (32 texture + 64 arithmetic), 16 temporary registers and 4 texture indirections.
- #pragma target 3.0 - compile to shader model 3.0:
- Shader Model 3.0 on Direct3D 9.
- ARB_vertex_program with no instruction limit and ARB_fragment_program with 1024 instruction limit (512 texture + 512 arithmetic), 32 temporary registers and 4 texture indirections. It is possible to override these limits using #pragma profileoption directive. E.g.
#pragma profileoption MaxTexIndirections=256raises texture indirections limit to 256. Note that some shader model 3.0 features, like derivative instructions, aren't supported by ARB_vertex_program/ARB_fragment_program. You can use #pragma glsl to translate to GLSL instead which has fewer restrictions.
- #pragma target 4.0 - compile to DX10 shader model 4.0. This target is currently only supported by DirectX 11 renderer.
- #pragma target 5.0 - compile to DX11 shader model 5.0. This target is currently only supported by DirectX 11 renderer.
Rendering platforms
Unity supports several rendering APIs (e.g. Direct3D 9 and OpenGL), and by default all shader programs are compiled for all supported renderers. You can indicate which renderers to compile for using the #pragma only_renderers or #pragma exclude_renderers directives. This is useful if you know you will only target Mac OS X (where there's no Direct3D), or only Windows (where Unity defaults to D3D), or if some particular shader is only possible in one renderer and not others. Currently supported renderer names are:
- d3d9 - Direct3D 9.
- d3d11 - Direct3D 11.
- opengl - OpenGL.
- gles - OpenGL ES 2.0.
- xbox360 - Xbox 360.
- ps3 - PlayStation 3.
- flash - Flash.
For example, this line would only compile shader into D3D9 mode:
#pragma only_renderers d3d9
Subsections
- Accessing shader properties in Cg
- Providing vertex data to vertex programs
- Built-in shader include files
- Predefined shader preprocessor macros
- Built-in state variables in shader programs
- GLSL Shader Programs
SL-PropertiesInPrograms
A shader declares its Material properties in a Properties block. If you want to access some of those properties in a shader program, you need to declare a Cg/HLSL variable with the same name and a matching type. An example is provided in Shader Tutorial: Vertex and Fragment Programs.
For example these shader properties:
_MyColor ("Some Color", Color) = (1,1,1,1)
_MyVector ("Some Vector", Vector) = (0,0,0,0)
_MyFloat ("My float", Float) = 0.5
_MyTexture ("Texture", 2D) = "white" {}
_MyCubemap ("Cubemap", CUBE) = "" {}
would be declared for access in Cg/HLSL code as:
fixed4 _MyColor; // low precision type is enough for colors
float4 _MyVector;
float _MyFloat;
sampler2D _MyTexture;
samplerCUBE _MyCubemap;
Cg can also accept the uniform keyword, but it is not necessary:
uniform float4 _MyColor;
Property types in ShaderLab map to Cg/HLSL variable types this way:
- Color and Vector properties map to float4, half4 or fixed4 variables.
- Range and Float properties map to float, half or fixed variables.
- Texture properties map to sampler2D variables for regular (2D) textures; Cubemaps map to samplerCUBE; and 3D textures map to sampler3D.
SL-VertexProgramInputs
For Cg/HLSL vertex programs, the vertex data must be passed in as a structure. Several commonly used vertex structures are defined in the UnityCG.cginc include file, and in most cases it's enough just to use them. The structures are:
- appdata_base: vertex consists of position, normal and one texture coordinate.
- appdata_tan: vertex consists of position, tangent, normal and one texture coordinate.
- appdata_full: vertex consists of position, tangent, normal, two texture coordinates and color.
For example, this shader colors the mesh based on its normals, and uses appdata_base as its vertex program input:
Shader "VertexInputSimple" {
SubShader {
Pass {
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct v2f {
float4 pos : SV_POSITION;
fixed4 color : COLOR;
};
v2f vert (appdata_base v)
{
v2f o;
o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
o.color.xyz = v.normal * 0.5 + 0.5;
o.color.w = 1.0;
return o;
}
fixed4 frag (v2f i) : COLOR0 { return i.color; }
ENDCG
}
}
}
If you want to access different vertex data, you have to declare the vertex structure yourself. The structure members must be from the following list:
- float4 vertex is the vertex position.
- float3 normal is the vertex normal.
- float4 texcoord is the first UV coordinate.
- float4 texcoord1 is the second UV coordinate.
- float4 tangent is the tangent vector (used for normal mapping).
- float4 color is the per-vertex color.
Examples
Visualizing UVs
The following shader example uses the vertex position and the first texture coordinate as its vertex shader inputs (defined in the structure appdata). It is very useful for debugging the UV coordinates of a mesh. UV coordinates are visualized as red and green colors, and an additional blue tint is applied to coordinates outside of the 0 to 1 range:
Shader "!Debug/UV 1" {
SubShader {
Pass {
Fog { Mode Off }
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
// vertex input: position, UV
struct appdata {
float4 vertex : POSITION;
float4 texcoord : TEXCOORD0;
};
struct v2f {
float4 pos : SV_POSITION;
float4 uv : TEXCOORD0;
};
v2f vert (appdata v) {
v2f o;
o.pos = mul( UNITY_MATRIX_MVP, v.vertex );
o.uv = float4( v.texcoord.xy, 0, 0 );
return o;
}
half4 frag( v2f i ) : COLOR {
half4 c = frac( i.uv );
if (any(saturate(i.uv) - i.uv))
c.b = 0.5;
return c;
}
ENDCG
}
}
}
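The fragment logic of this debug shader can be mirrored in plain Python to see what color a given UV pair produces (frac and saturate are reimplemented here for illustration):

```python
import math

def saturate(x):
    return min(max(x, 0.0), 1.0)

def debug_uv_color(u, v):
    """Red/green from the fractional UV; the blue channel flags out-of-range UVs."""
    r = u - math.floor(u)   # frac(i.uv).x
    g = v - math.floor(v)   # frac(i.uv).y
    out_of_range = saturate(u) != u or saturate(v) != v
    b = 0.5 if out_of_range else 0.0
    return (r, g, b)

print(debug_uv_color(0.25, 0.75))   # in range: (0.25, 0.75, 0.0)
print(debug_uv_color(1.25, -0.5))   # outside 0..1: the blue flag is set
```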

Debug UV1 shader applied to a torus knot model
Similarly, this shader visualizes the model's second UV set:
Shader "!Debug/UV 2" {
SubShader {
Pass {
Fog { Mode Off }
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
// vertex input: position, second UV
struct appdata {
float4 vertex : POSITION;
float4 texcoord1 : TEXCOORD1;
};
struct v2f {
float4 pos : SV_POSITION;
float4 uv : TEXCOORD0;
};
v2f vert (appdata v) {
v2f o;
o.pos = mul( UNITY_MATRIX_MVP, v.vertex );
o.uv = float4( v.texcoord1.xy, 0, 0 );
return o;
}
half4 frag( v2f i ) : COLOR {
half4 c = frac( i.uv );
if (any(saturate(i.uv) - i.uv))
c.b = 0.5;
return c;
}
ENDCG
}
}
}
Visualizing vertex colors
The following shader uses the vertex position and the per-vertex color as its vertex shader inputs (defined in the structure appdata):
Shader "!Debug/Vertex color" {
SubShader {
Pass {
Fog { Mode Off }
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
// vertex input: position, color
struct appdata {
float4 vertex : POSITION;
fixed4 color : COLOR;
};
struct v2f {
float4 pos : SV_POSITION;
fixed4 color : COLOR;
};
v2f vert (appdata v) {
v2f o;
o.pos = mul( UNITY_MATRIX_MVP, v.vertex );
o.color = v.color;
return o;
}
fixed4 frag (v2f i) : COLOR0 { return i.color; }
ENDCG
}
}
}

Debug Colors shader applied to a model that has illumination baked into vertex colors
Visualizing normals
The following shader uses the vertex position and the normal as its vertex shader inputs (defined in the structure appdata). The X, Y, Z components of the normal are visualized as R, G, B colors. Because the normal components are in the -1..1 range while displayable colors are in the 0..1 range, the normal is scaled and biased so that the output colors fall into the displayable 0..1 range.
Shader "!Debug/Normals" {
SubShader {
Pass {
Fog { Mode Off }
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
// vertex input: position, normal
struct appdata {
float4 vertex : POSITION;
float3 normal : NORMAL;
};
struct v2f {
float4 pos : SV_POSITION;
fixed4 color : COLOR;
};
v2f vert (appdata v) {
v2f o;
o.pos = mul( UNITY_MATRIX_MVP, v.vertex );
o.color.xyz = v.normal * 0.5 + 0.5;
o.color.w = 1.0;
return o;
}
fixed4 frag (v2f i) : COLOR0 { return i.color; }
ENDCG
}
}
}
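The scale-and-bias step is simple enough to verify by hand; here is a Python sketch of the normal-to-color mapping:

```python
def normal_to_color(normal):
    """Map a unit normal's components from [-1, 1] into RGB [0, 1]: n * 0.5 + 0.5."""
    return tuple(n * 0.5 + 0.5 for n in normal)

print(normal_to_color((0.0, 1.0, 0.0)))   # up-facing normal  -> (0.5, 1.0, 0.5)
print(normal_to_color((0.0, 0.0, -1.0)))  # -Z facing normal  -> (0.5, 0.5, 0.0)
```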

Debug Normals shader applied to a model. You can see the model has hard shading edges.
Visualizing tangents and binormals
Tangent and binormal vectors are used for normal mapping. In Unity, only the tangent vector is stored in vertices, and the binormal is derived from the normal and tangent.
The following shader uses the vertex position and the tangent as its vertex shader inputs (defined in the structure appdata). The tangent's X, Y, Z components are visualized as R, G, B colors. Because the components are in the -1..1 range, they are scaled and biased so that the output colors fall into the displayable 0..1 range.
Shader "!Debug/Tangents" {
SubShader {
Pass {
Fog { Mode Off }
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
// vertex input: position, tangent
struct appdata {
float4 vertex : POSITION;
float4 tangent : TANGENT;
};
struct v2f {
float4 pos : SV_POSITION;
fixed4 color : COLOR;
};
v2f vert (appdata v) {
v2f o;
o.pos = mul( UNITY_MATRIX_MVP, v.vertex );
o.color = v.tangent * 0.5 + 0.5;
return o;
}
fixed4 frag (v2f i) : COLOR0 { return i.color; }
ENDCG
}
}
}

Debug Tangents shader applied to a model.
The following shader visualizes binormals. It uses the vertex position, normal and tangent as vertex inputs. The binormal is computed from the normal and tangent. Just like normals or tangents, it needs to be scaled and biased into the displayable 0..1 range.
Shader "!Debug/Binormals" {
SubShader {
Pass {
Fog { Mode Off }
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
// vertex input: position, normal, tangent
struct appdata {
float4 vertex : POSITION;
float3 normal : NORMAL;
float4 tangent : TANGENT;
};
struct v2f {
float4 pos : SV_POSITION;
float4 color : COLOR;
};
v2f vert (appdata v) {
v2f o;
o.pos = mul( UNITY_MATRIX_MVP, v.vertex );
// calculate binormal
float3 binormal = cross( v.normal, v.tangent.xyz ) * v.tangent.w;
o.color.xyz = binormal * 0.5 + 0.5;
o.color.w = 1.0;
return o;
}
fixed4 frag (v2f i) : COLOR0 { return i.color; }
ENDCG
}
}
}
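The binormal computation in the vertex shader is a plain cross product scaled by the tangent's w component (which stores the handedness). A Python sketch of the same math:

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def binormal(normal, tangent_xyz, tangent_w):
    """binormal = cross(normal, tangent.xyz) * tangent.w, as in the shader above."""
    return tuple(c * tangent_w for c in cross(normal, tangent_xyz))

# Orthonormal basis: normal +Z, tangent +X, handedness w = 1 -> binormal +Y
print(binormal((0.0, 0.0, 1.0), (1.0, 0.0, 0.0), 1.0))  # (0.0, 1.0, 0.0)
```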

Debug Binormals shader applied to a model. Pretty!
SL-BuiltinIncludes
Unity contains several files that can be used by your shader programs to bring in predefined variables and helper functions. This is done by the standard #include directive, e.g.:
CGPROGRAM
// ...
#include "UnityCG.cginc"
// ...
ENDCG
Shader include files in Unity have the .cginc extension, and the built-in ones are:
- HLSLSupport.cginc - (automatically included) Helper macros and definitions for cross-platform shader compilation.
- UnityCG.cginc - commonly used global variables and helper functions.
- AutoLight.cginc - lighting & shadowing functionality; e.g. surface shaders use this file internally.
- Lighting.cginc - standard surface shader lighting models; automatically included when you're writing surface shaders.
- TerrainEngine.cginc - helper functions for Terrain & Vegetation shaders.
These files can be found inside the Unity application itself, if you want to take a look at what exactly is done in any of the helper code.
HLSLSupport.cginc
This file is automatically included when compiling shaders. It mostly declares various preprocessor macros to aid in multi-platform shader development.
UnityCG.cginc
This file is often included in Unity shaders to bring in many helper functions and definitions.
Data structures in UnityCG.cginc
- struct appdata_base: vertex shader input with position, normal, one texture coordinate.
- struct appdata_tan: vertex shader input with position, normal, tangent, one texture coordinate.
- struct appdata_full: vertex shader input with position, normal, tangent, vertex color and two texture coordinates.
- struct appdata_img: vertex shader input with position and one texture coordinate.
Generic helper functions in UnityCG.cginc
- float3 WorldSpaceViewDir (float4 v) - returns world space direction (not normalized) from given object space vertex position towards the camera.
- float3 ObjSpaceViewDir (float4 v) - returns object space direction (not normalized) from given object space vertex position towards the camera.
- float2 ParallaxOffset (half h, half height, half3 viewDir) - calculates UV offset for parallax normal mapping.
- fixed Luminance (fixed3 c) - converts color to luminance (grayscale).
- fixed3 DecodeLightmap (fixed4 color) - decodes color from Unity lightmap (RGBM or dLDR depending on platform).
- float4 EncodeFloatRGBA (float v) - encodes [0..1) range float into RGBA color, for storage in low precision render target.
- float DecodeFloatRGBA (float4 enc) - decodes RGBA color into a float.
- Similarly, float2 EncodeFloatRG (float v) and float DecodeFloatRG (float2 enc) use two color channels.
- float2 EncodeViewNormalStereo (float3 n) - encodes view space normal into two numbers in 0..1 range.
- float3 DecodeViewNormalStereo (float4 enc4) - decodes view space normal from enc4.xy.
Forward rendering helper functions in UnityCG.cginc
These functions are only useful when using forward rendering (ForwardBase or ForwardAdd pass types).
- float3 WorldSpaceLightDir (float4 v) - computes world space direction (not normalized) to light, given object space vertex position.
- float3 ObjSpaceLightDir (float4 v) - computes object space direction (not normalized) to light, given object space vertex position.
- float3 Shade4PointLights (...) - computes illumination from four point lights, with light data tightly packed into vectors. Forward rendering uses this to compute per-vertex lighting.
Vertex-lit helper functions in UnityCG.cginc
These functions are only useful when using per-vertex lit shaders ("Vertex" pass type).
- float3 ShadeVertexLights (float4 vertex, float3 normal) - computes illumination from four per-vertex lights and ambient, given object space position & normal.
SL-BuiltinMacros
When compiling shader programs, Unity defines several preprocessor macros.
Target platform
- SHADER_API_OPENGL - desktop OpenGL
- SHADER_API_D3D9 - Direct3D 9
- SHADER_API_XBOX360 - Xbox 360
- SHADER_API_PS3 - PlayStation 3
- SHADER_API_D3D11 - desktop Direct3D 11
- SHADER_API_GLES - OpenGL ES 2.0 (desktop or mobile); use the presence of SHADER_API_MOBILE to determine which.
- SHADER_API_FLASH - Flash Stage3D
- SHADER_API_D3D11_9X - Direct3D 11 target for Windows RT
Additionally, SHADER_TARGET_GLSL is defined when the target shading language is GLSL (always true when SHADER_API_GLES is defined; and can be true for SHADER_API_OPENGL when #pragma glsl is used).
SHADER_API_MOBILE is defined alongside SHADER_API_GLES when compiling for a "mobile" platform (iOS/Android), and not defined when compiling for "desktop" (NativeClient).
Platform difference helpers
Direct use of these platform macros is discouraged, since it's not very future proof. For example, if you're writing a shader that checks for D3D9, then maybe in the future the check should be extended to include D3D11. Instead, Unity defines several helper macros (in HLSLSupport.cginc) to help with that.
- UNITY_ATTEN_CHANNEL - which channel of the light attenuation texture contains the data; used in per-pixel lighting code. Defined to either 'r' or 'a'.
- UNITY_HALF_TEXEL_OFFSET - defined on platforms that need a half-texel offset adjustment in mapping texels to pixels (e.g. Direct3D 9).
- UNITY_UV_STARTS_AT_TOP - always defined with value 1 or 0; the value is 1 on platforms where the texture V coordinate is zero at the "top" of the texture. Direct3D-like platforms use 1; OpenGL-like platforms use 0.
- UNITY_MIGHT_NOT_HAVE_DEPTH_TEXTURE - defined if a platform might emulate shadow maps or depth textures by manually rendering depth into a texture.
- UNITY_PROJ_COORD(a) - given a 4-component vector, returns a texture coordinate suitable for projected texture reads. On most platforms this returns the given value directly.
- UNITY_NEAR_CLIP_VALUE - defined to the value of the near clipping plane; Direct3D-like platforms use 0.0 while OpenGL-like platforms use -1.0.
- UNITY_COMPILER_CG, UNITY_COMPILER_HLSL or UNITY_COMPILER_HLSL2GLSL - determine which underlying shader compiler is used; useful in case subtle syntax differences force you to write different shader code.
- UNITY_CAN_COMPILE_TESSELLATION - defined when the shader compiler "understands" tessellation shader HLSL syntax (currently only D3D11).
- UNITY_INITIALIZE_OUTPUT(type,name) - initializes variable name of the given type to zero.
Constant buffer macros
Direct3D 11 groups all shader variables into "constant buffers". Most of Unity's built-in variables are already grouped, but for variables in your own shaders it might be more optimal to put them into separate constant buffers depending on expected frequency of updates.
Use CBUFFER_START(name) and CBUFFER_END macros for that:
CBUFFER_START(MyRarelyUpdatedVariables)
float4 _SomeGlobalValue;
CBUFFER_END
Surface shader pass indicators
When Surface Shaders are compiled, they end up generating a lot of code for various passes to do lighting. When compiling each pass, one of the following macros is defined:
- UNITY_PASS_FORWARDBASE - forward rendering base pass (main directional light, lightmaps, SH).
- UNITY_PASS_FORWARDADD - forward rendering additive pass (one light per pass).
- UNITY_PASS_PREPASSBASE - deferred lighting base pass (renders normals & specular exponent).
- UNITY_PASS_PREPASSFINAL - deferred lighting final pass (applies lighting & textures).
- UNITY_PASS_SHADOWCASTER - shadow caster rendering pass.
- UNITY_PASS_SHADOWCOLLECTOR - shadow "gathering" pass for directional light shadows.
SL-BuiltinStateInPrograms
Often in shader programs you need to access some global state, for example the current model*view*projection matrix or the current ambient color. There's no need to declare these built-in state variables; you can just use them in shader programs.
Built-in matrices
The following matrices (float4x4) are supported:
- UNITY_MATRIX_MVP - current model * view * projection matrix
- UNITY_MATRIX_MV - current model * view matrix
- UNITY_MATRIX_P - current projection matrix
- UNITY_MATRIX_T_MV - transpose of model * view matrix
- UNITY_MATRIX_IT_MV - inverse transpose of model * view matrix
- UNITY_MATRIX_TEXTURE0 to UNITY_MATRIX_TEXTURE3 - texture transformation matrices
Built-in vectors
The following vectors (float4) are supported:
- UNITY_LIGHTMODEL_AMBIENT - current ambient color.
SL-GLSLShaderPrograms
In addition to using Cg/HLSL shader programs, OpenGL Shading Language (GLSL) shaders can be written directly.
However, use of raw GLSL is only recommended for testing, or when you know you will only target Mac OS X or OpenGL ES 2.0 compatible mobile devices. In the majority of cases, Unity will cross-compile Cg/HLSL into optimized GLSL (this is done by default for mobile platforms, and can be optionally turned on for desktop platforms via #pragma glsl).
GLSL snippets
GLSL program snippets are written between GLSLPROGRAM and ENDGLSL keywords.
In GLSL, all shader function entry points have to be called main(). When Unity loads the GLSL shader, it loads the source once for the vertex program, with VERTEX preprocessor define, and once more for the fragment program, with FRAGMENT preprocessor define. So the way to separate vertex and fragment program parts in GLSL snippet is to surround them with #ifdef VERTEX .. #endif and #ifdef FRAGMENT .. #endif. Each GLSL snippet must contain both a vertex program and a fragment program.
Standard include files match those provided for Cg shaders; they just have .glslinc extension: UnityCG.glslinc.
Vertex shader inputs come from predefined GLSL variables (gl_Vertex, gl_MultiTexCoord0, ...) or are user defined attributes. Usually only the tangent vector needs a user defined attribute:
attribute vec4 Tangent;
Data from vertex to fragment programs is passed through varying variables, for example:
varying vec3 lightDir; // vertex shader computes this, fragment shader uses this
Page last updated: 2012-01-02
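Putting the pieces above together, a minimal GLSL snippet with both entry points guarded by the preprocessor defines might look like this (an illustrative sketch, to be placed inside a Pass block; it simply passes the vertex color through):

```glsl
GLSLPROGRAM
varying vec4 color; // computed by the vertex program, used by the fragment program

#ifdef VERTEX
// Compiled when Unity defines VERTEX
void main() {
    color = gl_Color;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
#endif

#ifdef FRAGMENT
// Compiled when Unity defines FRAGMENT
void main() {
    gl_FragColor = color;
}
#endif
ENDGLSL
```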
SL-Shader
Shader is the root command of a shader file. Each file must define one (and only one) Shader. It specifies how any objects whose material uses this shader are rendered.
Syntax
Shader "name" { [Properties] Subshaders [Fallback] } Defines a shader. It will appear in the material inspector listed under name. Shaders optionally can define a list of properties that show up as material settings. After this comes a list of SubShaders, and optionally a fallback.
Details
Properties
Shaders can have a list of properties. Any properties declared in a shader are shown in the material inspector inside Unity. Typical properties are the object color, textures, or just arbitrary values to be used by the shader.
SubShaders & Fallback
Each shader comprises a list of sub-shaders. You must have at least one. When loading a shader, Unity will go through the list of subshaders and pick the first one that is supported by the end user's machine. If no subshaders are supported, Unity will try to use the fallback shader.
Different graphic cards have different capabilities. This raises an eternal issue for game developers; you want your game to look great on the latest hardware, but don't want it to be available only to those 3% of the population. This is where subshaders come in. Create one subshader that has all the fancy graphic effects you can dream of, then add more subshaders for older cards. These subshaders may implement the effect you want in a slower way, or they may choose not to implement some details.
Examples
Here is one of the simplest shaders possible:
// colored vertex lighting
Shader "Simple colored lighting" {
// a single color property
Properties {
_Color ("Main Color", Color) = (1,.5,.5,1)
}
// define one subshader
SubShader {
Pass {
Material {
Diffuse [_Color]
}
Lighting On
}
}
}
This shader defines a color property _Color (that shows up in material inspector as Main Color) with a default value of (1, 0.5, 0.5, 1). Then a single subshader is defined. The subshader consists of one Pass that turns on vertex lighting and sets up basic material for it.
Subsections
- ShaderLab syntax: Properties
- ShaderLab syntax: SubShader
- ShaderLab syntax: Fallback
- ShaderLab syntax: Other commands
SL-Properties
Shaders can define a list of parameters to be set by artists in Unity's material inspector. The Properties block in the shader file defines them.
Syntax
- Properties { Property [Property ...] }
- Defines the property block. Inside braces multiple properties are defined as follows.
- name ("display name", Range (min, max)) = number
- Defines a float property, represented as a slider from min to max in the inspector.
- name ("display name", Color) = (number,number,number,number)
- Defines a color property.
- name ("display name", 2D) = "name" { options }
- Defines a 2D texture property.
- name ("display name", Rect) = "name" { options }
- Defines a rectangle (non power of 2) texture property.
- name ("display name", Cube) = "name" { options }
- Defines a cubemap texture property.
- name ("display name", Float) = number
- Defines a float property.
- name ("display name", Vector) = (number,number,number,number)
- Defines a four component vector property.
Details
Each property inside the shader is referenced by name (in Unity, it's common to start shader property names with an underscore). The property will show up in the material inspector as display name. For each property, a default value is given after the equals sign:
- For Range and Float properties it's just a single number.
- For Color and Vector properties it's four numbers in parentheses.
- For texture (2D, Rect, Cube) the default value is either an empty string, or one of builtin default textures: "white", "black", "gray" or "bump".
Later on in the shader, property values are accessed using property name in square brackets: [name].
Example
Properties {
// properties for water shader
_WaveScale ("Wave scale", Range (0.02,0.15)) = 0.07 // sliders
_ReflDistort ("Reflection distort", Range (0,1.5)) = 0.5
_RefrDistort ("Refraction distort", Range (0,1.5)) = 0.4
_RefrColor ("Refraction color", Color) = (.34, .85, .92, 1) // color
_ReflectionTex ("Environment Reflection", 2D) = "" {} // textures
_RefractionTex ("Environment Refraction", 2D) = "" {}
_Fresnel ("Fresnel (A) ", 2D) = "" {}
_BumpMap ("Bumpmap (RGB) ", 2D) = "" {}
}
Texture property options
The options inside curly braces of the texture property are optional. The available options are:
- TexGen texgenmode: Automatic texture coordinate generation mode for this texture. Can be one of ObjectLinear, EyeLinear, SphereMap, CubeReflect, CubeNormal; these correspond directly to OpenGL texgen modes. Note that TexGen is ignored if custom vertex programs are used.
- LightmapMode If given, this texture will be affected by per-renderer lightmap parameters. That is, the texture to use can be not in the material, but taken from the settings of the Renderer instead, see Renderer scripting documentation.
Example
// EyeLinear texgen mode example
Shader "Texgen/Eye Linear" {
Properties {
_MainTex ("Base", 2D) = "white" { TexGen EyeLinear }
}
SubShader {
Pass {
SetTexture [_MainTex] { combine texture }
}
}
}
Page last updated: 2012-02-29
SL-SubShader
Each shader in Unity consists of a list of subshaders. When Unity has to display a mesh, it will find the shader to use, and pick the first subshader that runs on the user's graphics card.
Syntax
- Subshader { [Tags] [CommonState] Passdef [Passdef ...] }
- Defines the subshader as optional tags, common state and a list of pass definitions.
Details
A subshader defines a list of rendering passes and optionally sets up any state that is common to all passes. Additionally, subshader-specific Tags can be set up.
When Unity chooses which subshader to render with, it renders an object once for each Pass defined (and possibly more due to light interactions). As each render of the object is an expensive operation, you want to define the shader in the minimum number of passes possible. Of course, sometimes on some graphics hardware the needed effect can't be done in a single pass; then you have no choice but to use multiple passes.
Each pass definition can be a regular Pass, a Use Pass or a Grab Pass.
Any statements that are allowed in a Pass definition can also appear in Subshader block. This will make all passes use this "shared" state.
Example
// ...
SubShader {
Pass {
Lighting Off
SetTexture [_MainTex] {}
}
}
// ...
This subshader defines a single Pass that turns off any lighting and just displays a mesh with texture named _MainTex.
Page last updated: 2009-10-19
SL-Pass
The Pass block causes the geometry of an object to be rendered once.
Syntax
- Pass { [Name and Tags] [RenderSetup] [TextureSetup] }
- The basic pass command contains an optional list of render setup commands, optionally followed by a list of textures to use.
Name and tags
A Pass can define its Name and an arbitrary number of Tags - name/value strings that communicate the Pass' intent to the rendering engine.
Render Setup
A pass sets up various states of the graphics hardware, for example should alpha blending be turned on, should fog be used, and so on. The commands are these:
- Material { Material Block }
- Defines a material to use in a vertex lighting pipeline. See material page for details.
- Lighting On | Off
- Turn vertex lighting on or off. See material page for details.
- Cull Back | Front | Off
- Set polygon culling mode.
- ZTest (Less | Greater | LEqual | GEqual | Equal | NotEqual | Always)
- Set depth testing mode.
- ZWrite On | Off
- Set depth writing mode.
- Fog { Fog Block }
- Set fog parameters.
- AlphaTest (Less | Greater | LEqual | GEqual | Equal | NotEqual | Always) CutoffValue
- Turns on alpha testing.
- Blend SourceBlendMode DestBlendMode
- Sets alpha blending mode.
- Color Color value
- Sets color to use if vertex lighting is turned off.
- ColorMask RGB | A | 0 | any combination of R, G, B, A
- Set color writing mask. Writing ColorMask 0 turns off rendering to all color channels.
- Offset OffsetFactor , OffsetUnits
- Set depth offset. Note that this command intentionally only accepts constants (i.e., not shader parameters) as of Unity 3.0.
- SeparateSpecular On | Off
- Turns separate specular color for vertex lighting on or off. See material page for details.
- ColorMaterial AmbientAndDiffuse | Emission
- Uses per-vertex color when computing vertex lighting. See material page for details.
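As an illustrative sketch (not part of the original page), several of these render setup commands might be combined in a single pass, for example for a simple semitransparent overlay:

```
// Sketch: combining several render state commands (illustrative)
Pass {
    Cull Off                         // draw both sides of polygons
    ZWrite Off                       // don't record depth (typical for transparency)
    Blend SrcAlpha OneMinusSrcAlpha  // standard alpha blending
    Color (1,1,1,0.5)                // constant color, used since lighting is off
}
```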
Texture Setup
After the render state setup, you can specify a number of textures and their combining modes to apply using SetTexture commands:
The texture setup configures fixed function multitexturing pipeline, and is ignored if custom fragment shaders are used.
Details
Per-pixel Lighting
The per-pixel lighting pipeline works by rendering objects in multiple passes. Unity renders the object once to get ambient and any vertex lights in. Then it renders each pixel light affecting the object in a separate additive pass. See Render Pipeline for details.
Per-vertex Lighting
Per-vertex lighting is the standard Direct3D/OpenGL lighting model that is computed for each vertex. Lighting on turns it on. Lighting is affected by Material block, ColorMaterial and SeparateSpecular commands. See material page for details.
See Also
There are several special passes available for reusing common functionality or implementing various high-end effects:
- UsePass includes named passes from another shader.
- GrabPass grabs the contents of the screen into a texture, for use in a later pass.
Subsections
- ShaderLab syntax: Color, Material, Lighting
- ShaderLab syntax: Culling & Depth Testing
- ShaderLab syntax: Texture Combiners
- ShaderLab syntax: Fog
- ShaderLab syntax: Alpha testing
- ShaderLab syntax: Blending
- ShaderLab syntax: Pass Tags
- ShaderLab syntax: Name
- ShaderLab syntax: BindChannels
SL-Material
The material and lighting parameters are used to control the built-in vertex lighting. Vertex lighting is the standard Direct3D/OpenGL lighting model that is computed for each vertex. Lighting on turns it on. Lighting is affected by Material block, ColorMaterial and SeparateSpecular commands.
Per-pixel lights are usually implemented with custom vertex/fragment programs and don't use vertex lighting. For these you don't use any of the commands described here, instead you define your own vertex and fragment programs where you do all lighting, texturing and anything else yourself.

Vertex Coloring & Lighting is the first effect to get calculated for any rendered geometry. It operates at the vertex level, and calculates the base color that is used before textures are applied.
Syntax
The top level commands control whether to use fixed function lighting or not, and some configuration options. The main setup is in the Material Block, detailed further below.
- Color Color
- Sets the object to a solid color. A color is either four RGBA values in parentheses, or a color property name in square brackets.
- Material { Material Block }
- The Material block is used to define the material properties of the object.
- Lighting On | Off
- For the settings defined in the Material block to have any effect, you must enable Lighting with the Lighting On command. If lighting is off instead, the color is taken straight from the Color command.
- SeparateSpecular On | Off
- This command makes specular lighting be added to the end of the shader pass, so specular lighting is unaffected by texturing. Only has effect when Lighting On is used.
- ColorMaterial AmbientAndDiffuse | Emission
- Makes per-vertex color be used instead of the colors set in the material. AmbientAndDiffuse replaces the Ambient and Diffuse values of the material; Emission replaces the Emission value of the material.
Material Block
This contains settings for how the material reacts to the light. Any of these properties can be left out, in which case they default to black (i.e. have no effect).
- Diffuse Color
- The diffuse color component. This is an object's base color.
- Ambient Color
- The ambient color component. This is the color the object has when it's hit by the ambient light set in the RenderSettings.
- Specular Color
- The color of the object's specular highlight.
- Shininess Number
- The sharpness of the highlight, between 0 and 1. At 0 you get a huge highlight that looks a lot like diffuse lighting, at 1 you get a tiny speck.
- Emission Color
- The color of the object when it is not hit by any light.
The full color of lights hitting the object is:
Ambient * RenderSettings ambient setting + (Light Color * Diffuse + Light Color * Specular) + Emission
The light parts of the equation (within parentheses) are repeated for all lights that hit the object.
Typically you want to keep the Diffuse and Ambient colors the same (all builtin Unity shaders do this).
Examples
Always render object in pure red:
Shader "Solid Red" {
SubShader {
Pass { Color (1,0,0,0) }
}
}
Basic Shader that colors the object white and applies vertex lighting:
Shader "VertexLit White" {
SubShader {
Pass {
Material {
Diffuse (1,1,1,1)
Ambient (1,1,1,1)
}
Lighting On
}
}
}
An extended version that adds material color as a property visible in Material Inspector:
Shader "VertexLit Simple" {
Properties {
_Color ("Main Color", COLOR) = (1,1,1,1)
}
SubShader {
Pass {
Material {
Diffuse [_Color]
Ambient [_Color]
}
Lighting On
}
}
}
And finally, a full fledged vertex-lit shader (see also SetTexture reference page):
Shader "VertexLit" {
Properties {
_Color ("Main Color", Color) = (1,1,1,0)
_SpecColor ("Spec Color", Color) = (1,1,1,1)
_Emission ("Emissive Color", Color) = (0,0,0,0)
_Shininess ("Shininess", Range (0.01, 1)) = 0.7
_MainTex ("Base (RGB)", 2D) = "white" {}
}
SubShader {
Pass {
Material {
Diffuse [_Color]
Ambient [_Color]
Shininess [_Shininess]
Specular [_SpecColor]
Emission [_Emission]
}
Lighting On
SeparateSpecular On
SetTexture [_MainTex] {
Combine texture * primary DOUBLE, texture * primary
}
}
}
}
Page last updated: 2009-07-27
SL-CullAndDepth

Culling is an optimization that does not render polygons facing away from the viewer. All polygons have a front and a back side. Culling makes use of the fact that most objects are closed; if you have a cube, you will never see the sides facing away from you (there is always a side facing you in front of it) so we don't need to draw the sides facing away. Hence the term: Backface culling.
The other feature that makes rendering look correct is Depth testing. Depth testing makes sure that only the closest surfaces are drawn in a scene.
Syntax
- Cull Back | Front | Off
- Controls which sides of polygons should be culled (not drawn)
Back Don't render polygons facing away from the viewer (default).
Front Don't render polygons facing towards the viewer. Used for turning objects inside-out.
Off Disables culling - all faces are drawn. Used for special effects.
- ZWrite On | Off
- Controls whether pixels from this object are written to the depth buffer (default is On). If you're drawing solid objects, leave this on. If you're drawing semitransparent effects, switch to ZWrite Off. For more details read below.
- ZTest Less | Greater | LEqual | GEqual | Equal | NotEqual | Always
- How depth testing should be performed. Default is LEqual (draw objects that are in front of or at the same distance as existing objects; hide objects behind them).
- Offset Factor , Units
- Allows you to specify a depth offset with two parameters: factor and units. Factor scales the maximum Z slope, with respect to X or Y of the polygon, and units scales the minimum resolvable depth buffer value. This allows you to force one polygon to be drawn on top of another although they are actually in the same position. For example, Offset 0, -1 pulls the polygon closer to the camera, ignoring the polygon's slope, whereas Offset -1, -1 will pull the polygon even closer when looking at a grazing angle.
Examples
This object will render only the backfaces of an object:
Shader "Show Insides" {
SubShader {
Pass {
Material {
Diffuse (1,1,1,1)
}
Lighting On
Cull Front
}
}
}
Try to apply it to a cube, and notice how the geometry feels all wrong when you orbit around it. This is because you're only seeing the inside parts of the cube.
Transparent shader with depth writes
Usually semitransparent shaders do not write into the depth buffer. However, this can create draw order problems, especially with complex non-convex meshes. If you want to fade in & out meshes like that, then using a shader that fills in the depth buffer before rendering transparency might be useful.
Semitransparent object; left: standard Transparent/Diffuse shader; right: shader that writes to depth buffer.
Shader "Transparent/Diffuse ZWrite" {
Properties {
_Color ("Main Color", Color) = (1,1,1,1)
_MainTex ("Base (RGB) Trans (A)", 2D) = "white" {}
}
SubShader {
Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
LOD 200
// extra pass that renders to depth buffer only
Pass {
ZWrite On
ColorMask 0
}
// paste in forward rendering passes from Transparent/Diffuse
UsePass "Transparent/Diffuse/FORWARD"
}
Fallback "Transparent/VertexLit"
}
Debugging Normals
The next one is more interesting; first we render the object with normal vertex lighting, then we render the backfaces in bright pink. This has the effect of highlighting anywhere your normals need to be flipped. If you see physically-controlled objects getting 'sucked in' by any meshes, try assigning this shader to them. If any pink parts are visible, these parts will pull in anything unfortunate enough to touch them.
Here we go:
Shader "Reveal Backfaces" {
Properties {
_MainTex ("Base (RGB)", 2D) = "white" { }
}
SubShader {
// Render the front-facing parts of the object.
// We use a simple white material, and apply the main texture.
Pass {
Material {
Diffuse (1,1,1,1)
}
Lighting On
SetTexture [_MainTex] {
Combine Primary * Texture
}
}
// Now we render the back-facing triangles in the most
// irritating color in the world: BRIGHT PINK!
Pass {
Color (1,0,1,1)
Cull Front
}
}
}
Glass Culling
Controlling Culling is useful for more than debugging backfaces. If you have transparent objects, you quite often want to show the backfacing side of an object. If you render without any culling (Cull Off), you'll most likely have some rear faces overlapping some of the front faces.
Here is a simple shader that will work for convex objects (spheres, cubes, car windscreens).
Shader "Simple Glass" {
Properties {
_Color ("Main Color", Color) = (1,1,1,0)
_SpecColor ("Spec Color", Color) = (1,1,1,1)
_Emission ("Emissive Color", Color) = (0,0,0,0)
_Shininess ("Shininess", Range (0.01, 1)) = 0.7
_MainTex ("Base (RGB)", 2D) = "white" { }
}
SubShader {
// We use the material in many passes by defining them in the subshader.
// Anything defined here becomes default values for all contained passes.
Material {
Diffuse [_Color]
Ambient [_Color]
Shininess [_Shininess]
Specular [_SpecColor]
Emission [_Emission]
}
Lighting On
SeparateSpecular On
// Set up alpha blending
Blend SrcAlpha OneMinusSrcAlpha
// Render the back facing parts of the object.
// If the object is convex, these will always be further away
// than the front-faces.
Pass {
Cull Front
SetTexture [_MainTex] {
Combine Primary * Texture
}
}
// Render the parts of the object facing us.
// If the object is convex, these will be closer than the
// back-faces.
Pass {
Cull Back
SetTexture [_MainTex] {
Combine Primary * Texture
}
}
}
}
Page last updated: 2012-09-05
SL-SetTexture
After the basic vertex lighting has been calculated, textures are applied. In ShaderLab this is done using the SetTexture command.
SetTexture commands have no effect when fragment programs are used, as in that case pixel operations are completely described in the shader.

Texturing is the place to do old-style combiner effects. You can have multiple SetTexture commands inside a pass - all textures are applied in sequence, like layers in a painting program. SetTexture commands must be placed at the end of a Pass.
Syntax
- SetTexture [TexturePropertyName] { Texture Block }
- Assigns a texture. TexturePropertyName must be defined as a texture property. How to apply the texture is defined inside the texture block.
The texture block controls how the texture is applied. Inside the texture block can be up to three commands: combine, matrix and constantColor.
Texture block combine command
- combine src1 * src2 - Multiplies src1 and src2 together. The result will be darker than either input.
- combine src1 + src2 - Adds src1 and src2 together. The result will be lighter than either input.
- combine src1 - src2 - Subtracts src2 from src1.
- combine src1 +- src2 - Adds src1 to src2, then subtracts 0.5 (a signed add).
- combine src1 lerp (src2) src3 - Interpolates between src3 and src1, using the alpha of src2. Note that the interpolation is in the opposite direction: src1 is used when alpha is one, and src3 is used when alpha is zero.
- combine src1 * src2 + src3 - Multiplies src1 with the alpha component of src2, then adds src3.
- combine src1 * src2 +- src3 - Multiplies src1 with the alpha component of src2, then does a signed add with src3.
- combine src1 * src2 - src3 - Multiplies src1 with the alpha component of src2, then subtracts src3.
All the src properties can be either one of previous, constant, primary or texture.
- Previous is the result of the previous SetTexture.
- Primary is the color from the lighting calculation or the vertex color if it is bound.
- Texture is the color of the texture specified by [_TextureName] in the SetTexture (see above).
- Constant is the color specified in ConstantColor.
Modifiers:
- The formulas specified above can optionally be followed by the keywords Double or Quad to make the resulting color 2x or 4x as bright.
- All the src properties, except the lerp argument, can optionally be preceded by one - (minus) to make the resulting color negated.
- All the src properties can be followed by alpha to take only the alpha channel.
Texture block constantColor command
- ConstantColor color
- Defines a constant color that can be used in the combine command.
Texture block matrix command
- matrix [MatrixPropertyName]
- Transforms texture coordinates used in this command with the given matrix.
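As an illustrative sketch (not from the original page), a texture matrix could be applied like this; note that the matrix value itself would have to be supplied from script (for example via Material.SetMatrix), since matrices cannot be declared in the Properties block:

```
// Sketch: transforming texture coordinates with a matrix (illustrative;
// _Matrix is assumed to be set from a script via Material.SetMatrix)
SetTexture [_MainTex] {
    matrix [_Matrix]
    combine texture
}
```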
Details
Before fragment programs existed, older graphics cards used a layered approach to textures. The textures are applied one after another, modifying the color that will be written to the screen. For each texture, the texture is typically combined with the result of the previous operation.

Note that on "true fixed function" devices (OpenGL, OpenGL ES 1.1, Wii) the value of each SetTexture stage is clamped to 0..1 range. Everywhere else (Direct3D, OpenGL ES 2.0) the range may or may not be higher. This might affect SetTexture stages that can produce values higher than 1.0.
Separate Alpha & Color computation
By default, the combiner formula is used for calculating both the RGB and alpha component of the color. Optionally, you can specify a separate formula for the alpha calculation. This looks like this:
SetTexture [_MainTex] { combine previous * texture, previous + texture }
Here, we multiply the RGB colors and add the alpha.
Specular highlights
By default the primary color is the sum of the diffuse, ambient and specular colors (as defined in the Lighting calculation). If you specify SeparateSpecular On in the pass options, the specular color will be added in after the combiner calculation, rather than before. This is the default behavior of the built-in VertexLit shader.
Graphics hardware support
Modern graphics cards with fragment shader support ("shader model 2.0" on desktop, OpenGL ES 2.0 on mobile) support all SetTexture modes and at least 4 texture stages (many of them support 8). If you're running on really old hardware (made before 2003 on PC, or before iPhone 3GS on mobile), you might have as few as two texture stages. The shader author should write separate SubShaders for the cards they want to support.
Examples
Alpha Blending Two Textures
This small example takes two textures. First it sets the first combiner to just take the _MainTex, then it uses the alpha channel of _BlendTex to fade in the RGB colors of _BlendTex.
Shader "Examples/2 Alpha Blended Textures" {
Properties {
_MainTex ("Base (RGB)", 2D) = "white" {}
_BlendTex ("Alpha Blended (RGBA) ", 2D) = "white" {}
}
SubShader {
Pass {
// Apply base texture
SetTexture [_MainTex] {
combine texture
}
// Blend in the alpha texture using the lerp operator
SetTexture [_BlendTex] {
combine texture lerp (texture) previous
}
}
}
}
Alpha Controlled Self-illumination
This shader uses the alpha component of the _MainTex to decide where to apply lighting. It does this by applying the texture in two stages. In the first stage, the alpha value of the texture is used to blend between the vertex color and solid white. In the second stage, the RGB values of the texture are multiplied in.
Shader "Examples/Self-Illumination" {
Properties {
_MainTex ("Base (RGB) Self-Illumination (A)", 2D) = "white" {}
}
SubShader {
Pass {
// Set up basic white vertex lighting
Material {
Diffuse (1,1,1,1)
Ambient (1,1,1,1)
}
Lighting On
// Use texture alpha to blend up to white (= full illumination)
SetTexture [_MainTex] {
constantColor (1,1,1,1)
combine constant lerp(texture) previous
}
// Multiply in texture
SetTexture [_MainTex] {
combine previous * texture
}
}
}
}
We can do something else for free here, though; instead of blending to solid white, we can add a self-illumination color and blend to that. Note the use of ConstantColor to get the _IlluminCol color from the properties into the texture blending.
Shader "Examples/Self-Illumination 2" {
Properties {
_IlluminCol ("Self-Illumination color (RGB)", Color) = (1,1,1,1)
_MainTex ("Base (RGB) Self-Illumination (A)", 2D) = "white" {}
}
SubShader {
Pass {
// Set up basic white vertex lighting
Material {
Diffuse (1,1,1,1)
Ambient (1,1,1,1)
}
Lighting On
// Use texture alpha to blend up to white (= full illumination)
SetTexture [_MainTex] {
// Pull the color property into this blender
constantColor [_IlluminCol]
// And use the texture's alpha to blend between it and
// vertex color
combine constant lerp(texture) previous
}
// Multiply in texture
SetTexture [_MainTex] {
combine previous * texture
}
}
}
}
And finally, we take all the lighting properties of the vertexlit shader and pull that in:
Shader "Examples/Self-Illumination 3" {
Properties {
_IlluminCol ("Self-Illumination color (RGB)", Color) = (1,1,1,1)
_Color ("Main Color", Color) = (1,1,1,0)
_SpecColor ("Spec Color", Color) = (1,1,1,1)
_Emission ("Emissive Color", Color) = (0,0,0,0)
_Shininess ("Shininess", Range (0.01, 1)) = 0.7
_MainTex ("Base (RGB)", 2D) = "white" {}
}
SubShader {
Pass {
// Set up basic vertex lighting
Material {
Diffuse [_Color]
Ambient [_Color]
Shininess [_Shininess]
Specular [_SpecColor]
Emission [_Emission]
}
Lighting On
// Use texture alpha to blend up to white (= full illumination)
SetTexture [_MainTex] {
constantColor [_IlluminCol]
combine constant lerp(texture) previous
}
// Multiply in texture
SetTexture [_MainTex] {
combine previous * texture
}
}
}
}
Page last updated: 2012-08-17
SL-Fog
Fog parameters are controlled with Fog command.

Fogging blends the color of the generated pixels down towards a constant color based on distance from camera. Fogging does not modify a blended pixel's alpha value, only its RGB components.
Syntax
- Fog { Fog Commands }
- Specify fog commands inside curly braces.
- Mode Off | Global | Linear | Exp | Exp2
- Defines fog mode. Default is global, which translates to Off or Exp2 depending on whether fog is turned on in Render Settings.
- Color ColorValue
- Sets fog color.
- Density FloatValue
- Sets density for exponential fog.
- Range FloatValue , FloatValue
- Sets near & far range for linear fog.
Details
Default fog settings are based on Render Settings: fog mode is either Exp2 or Off; density & color are taken from the settings as well.
Note that if you use fragment programs, Fog settings of the shader will still be applied. On platforms where there is no fixed function Fog functionality, Unity will patch shaders at runtime to support the requested Fog mode.
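As an illustrative sketch (not from the original page), a pass could override the global fog settings like this:

```
// Sketch: forcing linear grey fog in a pass (illustrative)
Pass {
    Fog {
        Mode Linear            // override the global Render Settings mode
        Color (0.5,0.5,0.5,1)  // fog color
        Range 0, 50            // near & far range for linear fog
    }
    SetTexture [_MainTex] { combine texture }
}
```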
Page last updated: 2010-08-18
SL-AlphaTest
The alpha test is a last chance to reject a pixel from being written to the screen.

After the final output color has been calculated, the color can optionally have its alpha value compared to a fixed value. If the test fails, the pixel is not written to the display.
Syntax
- AlphaTest Off
- Render all pixels (default).
- AlphaTest comparison AlphaValue
- Set up the alpha test to only render pixels whose alpha value is within a certain range.
Comparison
Comparison is one of the following words:
| Greater | Only render pixels whose alpha is greater than AlphaValue. |
| GEqual | Only render pixels whose alpha is greater than or equal to AlphaValue. |
| Less | Only render pixels whose alpha value is less than AlphaValue. |
| LEqual | Only render pixels whose alpha value is less than or equal to AlphaValue. |
| Equal | Only render pixels whose alpha value equals AlphaValue. |
| NotEqual | Only render pixels whose alpha value differs from AlphaValue. |
| Always | Render all pixels. This is functionally equivalent to AlphaTest Off. |
| Never | Don't render any pixels. |
AlphaValue
A floating-point number between 0 and 1. This can also be a variable reference to a float or range property, in which case it should be written using the standard square bracket notation ([VariableName]).
Details
The alpha test is important when rendering concave objects with transparent parts. The graphics card maintains a record of the depth of every pixel written to the screen. If a new pixel is further away than one already rendered, the new pixel is not written to the display. This means that even with Blending, objects will not show through.

In this figure, the tree on the left is rendered using AlphaTest. Note how the pixels in it are either completely transparent or opaque. The center tree is rendered using only Alpha Blending - notice how transparent parts of nearby branches cover the distant leaves because of the depth buffer. The tree on the right is rendered using the last example shader - which implements a combination of blending and alpha testing to hide any artifacts.
Examples
The simplest possible example: assign a texture with an alpha channel to it. The object will only be visible where alpha is greater than 0.5.
Shader "Simple Alpha Test" {
Properties {
_MainTex ("Base (RGB) Transparency (A)", 2D) = "" {}
}
SubShader {
Pass {
// Only render pixels with an alpha larger than 50%
AlphaTest Greater 0.5
SetTexture [_MainTex] { combine texture }
}
}
}
This is not much good by itself. Let us add some lighting and make the cutoff value tweakable:
Shader "Cutoff Alpha" {
Properties {
_MainTex ("Base (RGB) Transparency (A)", 2D) = "" {}
_Cutoff ("Alpha cutoff", Range (0,1)) = 0.5
}
SubShader {
Pass {
// Use the Cutoff parameter defined above to determine
// what to render.
AlphaTest Greater [_Cutoff]
Material {
Diffuse (1,1,1,1)
Ambient (1,1,1,1)
}
Lighting On
SetTexture [_MainTex] { combine texture * primary }
}
}
}
When rendering plants and trees, many games have the hard edges typical of alpha testing. A way around that is to render the object twice. In the first pass, we use alpha testing to only render pixels that are more than 50% opaque. In the second pass, we alpha-blend the graphic in the parts that were cut away, without recording the depth of the pixel. We might get a bit of confusion as further away branches overwrite the nearby ones, but in practice, that is hard to see as leaves have a lot of visual detail in them.
Shader "Vegetation" {
Properties {
_Color ("Main Color", Color) = (.5, .5, .5, .5)
_MainTex ("Base (RGB) Alpha (A)", 2D) = "white" {}
_Cutoff ("Base Alpha cutoff", Range (0,.9)) = .5
}
SubShader {
// Set up basic lighting
Material {
Diffuse [_Color]
Ambient [_Color]
}
Lighting On
// Render both front and back facing polygons.
Cull Off
// first pass:
// render any pixels that are more than [_Cutoff] opaque
Pass {
AlphaTest Greater [_Cutoff]
SetTexture [_MainTex] {
combine texture * primary, texture
}
}
// Second pass:
// render in the semitransparent details.
Pass {
// Don't write to the depth buffer
ZWrite Off
// Don't write pixels we have already written.
ZTest Less
// Only render pixels with alpha less than or equal to the cutoff value
AlphaTest LEqual [_Cutoff]
// Set up alpha blending
Blend SrcAlpha OneMinusSrcAlpha
SetTexture [_MainTex] {
combine texture * primary, texture
}
}
}
}
Note that we have some setup inside the SubShader, rather than in the individual passes. Any state set in the SubShader is inherited as defaults in passes inside it.
Page last updated: 2008-04-27
SL-Blend
Blending is used to make transparent objects.

When graphics are rendered, after all shaders have executed and all textures have been applied, the pixels are written to the screen. How they are combined with what is already there is controlled by the Blend command.
Syntax
- Blend Off
- Turn off blending
- Blend SrcFactor DstFactor
- Configure & enable blending. The generated color is multiplied by the SrcFactor. The color already on screen is multiplied by DstFactor and the two are added together.
- Blend SrcFactor DstFactor, SrcFactorA DstFactorA
- Same as above, but use different factors for blending the alpha channel.
- BlendOp Min | Max | Sub | RevSub
- Instead of adding blended colors together, do a different operation on them.
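For illustration, here is a minimal sketch (the shader name is made up) of how BlendOp combines with the blend factors. RevSub with One One factors computes destination minus source, darkening the screen by the texture:

```
Shader "Hypothetical/Subtractive" {
Properties {
_MainTex ("Texture", 2D) = "white" {}
}
SubShader {
Tags { "Queue" = "Transparent" }
Pass {
// result = destination - source
BlendOp RevSub
Blend One One
SetTexture [_MainTex] { combine texture }
}
}
}
```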
Properties
All following properties are valid for both SrcFactor & DstFactor. Source refers to the calculated color, Destination is the color already on the screen.
| One | The value of one - use this to let either the source or the destination color come through fully. |
| Zero | The value zero - use this to remove either the source or the destination values. |
| SrcColor | The value of this stage is multiplied by the source color value. |
| SrcAlpha | The value of this stage is multiplied by the source alpha value. |
| DstColor | The value of this stage is multiplied by the frame buffer (destination) color value. |
| DstAlpha | The value of this stage is multiplied by the frame buffer (destination) alpha value. |
| OneMinusSrcColor | The value of this stage is multiplied by (1 - source color). |
| OneMinusSrcAlpha | The value of this stage is multiplied by (1 - source alpha). |
| OneMinusDstColor | The value of this stage is multiplied by (1 - destination color). |
| OneMinusDstAlpha | The value of this stage is multiplied by (1 - destination alpha). |
Details
Below are the most common blend types:
Blend SrcAlpha OneMinusSrcAlpha // Alpha blending
Blend One One // Additive
Blend OneMinusDstColor One // Soft Additive
Blend DstColor Zero // Multiplicative
Blend DstColor SrcColor // 2x Multiplicative
Example
Here is a small example shader that adds a texture to whatever is on the screen already:
Shader "Simple Additive" {
Properties {
_MainTex ("Texture to blend", 2D) = "black" {}
}
SubShader {
Tags { "Queue" = "Transparent" }
Pass {
Blend One One
SetTexture [_MainTex] { combine texture }
}
}
}
And a more complex one, Glass. This is a two-pass shader:
- The first pass renders a lit, alpha-blended texture on to the screen. The alpha channel decides the transparency.
- The second pass renders a reflection cubemap on top of the alpha-blended window, using additive transparency.
Shader "Glass" {
Properties {
_Color ("Main Color", Color) = (1,1,1,1)
_MainTex ("Base (RGB) Transparency (A)", 2D) = "white" {}
_Reflections ("Base (RGB) Gloss (A)", Cube) = "skybox" { TexGen CubeReflect }
}
SubShader {
Tags { "Queue" = "Transparent" }
Pass {
Blend SrcAlpha OneMinusSrcAlpha
Material {
Diffuse [_Color]
}
Lighting On
SetTexture [_MainTex] {
combine texture * primary double, texture * primary
}
}
Pass {
Blend One One
Material {
Diffuse [_Color]
}
Lighting On
SetTexture [_Reflections] {
combine texture
Matrix [_Reflection]
}
}
}
}
Page last updated: 2012-05-31
SL-PassTags
Passes use tags to tell the rendering engine how and when they expect to be rendered.
Syntax
- Tags { "TagName1" = "Value1" "TagName2" = "Value2" }
- Specifies TagName1 to have Value1, TagName2 to have Value2. You can have as many tags as you like.
Details
Tags are basically key-value pairs. Inside a Pass, tags are used to control which role this pass has in the lighting pipeline (ambient, vertex lit, pixel lit etc.) and some other options. Note that the following tags recognized by Unity must be inside a Pass section and not inside a SubShader!
LightMode tag
LightMode tag defines Pass' role in the lighting pipeline. See render pipeline for details. These tags are rarely used manually; most often shaders that need to interact with lighting are written as Surface Shaders and then all those details are taken care of.
Possible values for LightMode tag are:
- Always: Always rendered; no lighting is applied.
- ForwardBase: Used in Forward rendering, ambient, main directional light and vertex/SH lights are applied.
- ForwardAdd: Used in Forward rendering; additive per-pixel lights are applied, one pass per light.
- PrepassBase: Used in Deferred Lighting, renders normals & specular exponent.
- PrepassFinal: Used in Deferred Lighting, renders final color by combining textures, lighting & emission.
- Vertex: Used in Vertex Lit rendering when object is not lightmapped; all vertex lights are applied.
- VertexLMRGBM: Used in Vertex Lit rendering when object is lightmapped; on platforms where lightmap is RGBM encoded.
- VertexLM: Used in Vertex Lit rendering when object is lightmapped; on platforms where lightmap is double-LDR encoded (generally mobile platforms & old desktop GPUs).
- ShadowCaster: Renders object as shadow caster.
- ShadowCollector: Gathers object's shadows into screen-space buffer for Forward rendering path.
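For illustration, a pass that declares itself as the forward-rendering base pass tags itself like this (sketch; the body of a real ForwardBase pass would contain the actual lighting code):

```
Pass {
Tags { "LightMode" = "ForwardBase" }
// ... ambient, main directional light and
// vertex/SH lighting computed here ...
}
```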
RequireOptions tag
A pass can indicate that it should only be rendered when some external conditions are met. This is done by using RequireOptions tag, whose value is a string of space separated options. Currently the options supported by Unity are:
- SoftVegetation: Render this pass only if Soft Vegetation is on in Quality Settings.
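A pass gated on that quality setting would be written like this (sketch; the pass body is omitted):

```
Pass {
// Skipped entirely unless Soft Vegetation is
// enabled in Quality Settings
Tags { "RequireOptions" = "SoftVegetation" }
// ... alpha-blended vegetation detail pass ...
}
```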
See Also
SubShaders can be given Tags as well, see SubShader Tags.
Page last updated: 2012-01-27
SL-Name
Syntax
- Name "PassName"
- Gives the current pass the name PassName.
Details
A pass can be given a name so that a UsePass command can reference it.
Page last updated: 2012-11-13
SL-BindChannels
The BindChannels command allows you to specify how vertex data is mapped to the graphics hardware.
BindChannels has no effect when programmable vertex shaders are used, as in that case bindings are controlled by vertex shader inputs.
By default, Unity figures out the bindings for you, but in some cases you may want custom ones to be used.
For example, you could map the primary UV set to be used in the first texture stage and the secondary UV set to be used in the second texture stage, or tell the hardware that vertex colors should be taken into account.
Syntax
- BindChannels { Bind "source", target }
- Specifies that vertex data source maps to hardware target.
Source can be one of:
- Vertex: vertex position
- Normal: vertex normal
- Tangent: vertex tangent
- Texcoord: primary UV coordinate
- Texcoord1: secondary UV coordinate
- Color: per-vertex color
Target can be one of:
- Vertex: vertex position
- Normal: vertex normal
- Tangent: vertex tangent
- Texcoord0, Texcoord1, ...: texture coordinates for the corresponding texture stage
- Texcoord: texture coordinates for all texture stages
- Color: vertex color
Details
Unity places some restrictions on which sources can be mapped to which targets. Source and target must match for Vertex, Normal, Tangent and Color. Texture coordinates from the mesh (Texcoord and Texcoord1) can be mapped into texture coordinate targets (Texcoord for all texture stages, or TexcoordN for a specific stage).
There are two typical use cases for BindChannels:
- Shaders that take vertex colors into account.
- Shaders that use two UV sets.
Example
// Maps the first UV set to the first texture stage
// and the second UV set to the second texture stage
BindChannels {
Bind "Vertex", vertex
Bind "texcoord", texcoord0
Bind "texcoord1", texcoord1
}
// Maps the first UV set to all texture stages
// and uses vertex colors
BindChannels {
Bind "Vertex", vertex
Bind "texcoord", texcoord
Bind "Color", color
}
Page last updated: 2012-11-13
SL-UsePass
The UsePass command uses named passes from another shader.
Syntax
UsePass "Shader/Name"
Inserts all passes with the given name from the given shader. Shader/Name contains the name of the shader and the name of the pass, separated by a slash character. Note that only the first subshader is taken into account.
Details
Some shaders can reuse existing passes from other shaders, reducing code duplication. For example, in most pixel lit shaders the ambient or vertex lighting passes are the same as in the corresponding VertexLit shaders. The UsePass command does just that: it includes a given pass from another shader. As an example, the following command uses the pass with the name "BASE" from the built-in Specular shader:
UsePass "Specular/BASE"
In order for UsePass to work, a name must be given to the pass one wishes to use. The Name command inside the pass gives it a name:
Name "MyPassName"
Note that internally all pass names are uppercased, so UsePass must refer to the name in uppercase.
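Putting Name and UsePass together, a minimal sketch (the shader names here are made up for illustration; in practice each Shader block lives in its own file):

```
Shader "Custom/NamedPassExample" {
SubShader {
Pass {
// Referenced below in uppercase
Name "MYPASS"
Color (1,0,0,0)
}
}
}

Shader "Custom/ReusesPass" {
SubShader {
// Inserts the pass named MYPASS from the shader above
UsePass "Custom/NamedPassExample/MYPASS"
Pass {
Color (0,1,0,0)
}
}
}
```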
Page last updated: 2012-11-13
SL-GrabPass
GrabPass is a special pass type - it grabs the contents of the screen where the object is about to be drawn into a texture. This texture can be used in subsequent passes to do advanced image based effects.
Syntax
The GrabPass belongs inside a subshader. It can take two forms:
- GrabPass { } will grab the current screen contents into a texture. The texture can be accessed in further passes by the _GrabTexture name. Note: this form of grab pass will do the expensive screen grabbing operation for each object that uses it!
- GrabPass { "TextureName" } will grab the screen contents into a texture, but will only do that once per frame for the first object that uses the given texture name. The texture can be accessed in further passes by the given texture name. This is the more performant way when you have multiple objects using a grab pass in the scene.
Additionally, GrabPass can use Name and Tags commands.
Example
Here is an expensive way to invert the colors of what was rendered before:
Shader "GrabPassInvert" {
SubShader {
// Draw ourselves after all opaque geometry
Tags { "Queue" = "Transparent" }
// Grab the screen behind the object into _GrabTexture
GrabPass { }
// Render the object with the texture generated above, and invert its colors
Pass {
SetTexture [_GrabTexture] { combine one-texture }
}
}
}
This shader has two passes: the first pass grabs whatever is behind the object at the time of rendering, and the second pass then applies that to the object. Of course, the same effect could be achieved much more cheaply using an invert blend mode.
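A sketch of the second, named form, so that multiple objects share a single grab per frame (the texture name "_SharedGrabTexture" is arbitrary):

```
Shader "GrabPassInvertShared" {
SubShader {
Tags { "Queue" = "Transparent" }
// Grabbed at most once per frame, shared by every
// object whose shader uses this texture name
GrabPass { "_SharedGrabTexture" }
Pass {
SetTexture [_SharedGrabTexture] { combine one-texture }
}
}
}
```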
Page last updated: 2012-07-10
SL-SubshaderTags
Subshaders use tags to tell the rendering engine how and when they expect to be rendered.
Syntax
- Tags { "TagName1" = "Value1" "TagName2" = "Value2" }
- Specifies TagName1 to have Value1, TagName2 to have Value2. You can have as many tags as you like.
Details
Tags are basically key-value pairs. Inside a SubShader tags are used to determine rendering order and other parameters of a subshader. Note that the following tags recognized by Unity must be inside SubShader section and not inside Pass!
Rendering Order - Queue tag
You can determine in which order your objects are drawn using the Queue tag. A Shader decides which render queue its objects belong to; this way, any Transparent shaders make sure they are drawn after all opaque objects, and so on.
There are five pre-defined render queues, but there can be more queues in between the predefined ones. The predefined queues are:
- Background - this render queue is rendered before any others. It is used for skyboxes and the like.
- Geometry (default) - this is used for most objects. Opaque geometry uses this queue.
- AlphaTest - alpha tested geometry uses this queue. It's a separate queue from the Geometry one since it's more efficient to render alpha-tested objects after all solid ones are drawn.
- Transparent - this render queue is rendered after Geometry and AlphaTest, in back-to-front order. Anything alpha-blended (i.e. shaders that don't write to depth buffer) should go here (glass, particle effects).
- Overlay - this render queue is meant for overlay effects. Anything rendered last should go here (e.g. lens flares).
Shader "Transparent Queue Example" {
SubShader {
Tags {"Queue" = "Transparent" }
Pass {
// rest of the shader body...
}
}
}
An example illustrating how to render something in the transparent queue
The Geometry render queue optimizes the drawing order of the objects for best performance. All other render queues sort objects by distance, starting rendering from the furthest ones and ending with the closest ones.
For special uses, in-between queues can be used. Internally each queue is represented by an integer index; Background is 1000, Geometry is 2000, AlphaTest is 2450, Transparent is 3000 and Overlay is 4000. If a shader uses a queue like this:
Tags { "Queue" = "Geometry+1" }
This will make the object be rendered after all opaque objects, but before transparent objects, as the render queue index will be 2001 (geometry plus one). This is useful in situations where you want some objects to always be drawn between other sets of objects. For example, in most cases transparent water should be drawn after opaque objects but before transparent objects.
RenderType tag
The RenderType tag categorizes shaders into several predefined groups, e.g. whether it is an opaque shader, an alpha-tested shader, etc. This is used by Shader Replacement and in some cases to produce the camera's depth texture.
IgnoreProjector tag
If IgnoreProjector tag is given and has a value of "True", then an object that uses this shader will not be affected by Projectors. This is mostly useful on semitransparent objects, because there is no good way for Projectors to affect them.
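A semitransparent subshader would typically combine this with the Transparent queue, for example:

```
SubShader {
Tags { "Queue" = "Transparent" "IgnoreProjector" = "True" }
// ... alpha-blended passes unaffected by Projectors ...
}
```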
See Also
Passes can be given Tags as well, see Pass Tags.
Page last updated: 2012-06-21
SL-Fallback
After all Subshaders, a Fallback can be defined. It basically says "if none of the subshaders can run on this hardware, try using the ones from another shader".
Syntax
- Fallback "name"
- Fallback to shader with a given name.
- Fallback Off
- Explicitly state that there is no fallback and no warning should be printed, even if no subshaders can run on this hardware.
Details
A fallback statement has the same effect as if all subshaders from the other shader were inserted in its place.
Example
Shader "example" {
// properties and subshaders here...
Fallback "otherexample"
}
Page last updated: 2008-04-28
SL-Other
Category
A Category is a logical grouping of any commands below it. This is mostly used to "inherit" rendering state. For example, your shader might have multiple subshaders, each of which requires fog to be off and blending to be set to additive. You can use a Category for that:
Shader "example" {
Category {
Fog { Mode Off }
Blend One One
SubShader {
// ...
}
SubShader {
// ...
}
// ...
}
}
A Category block only affects shader parsing; it is exactly the same as pasting the state it sets into all the blocks below it. It does not affect shader execution speed at all.
Page last updated: 2012-11-13
SL-AdvancedTopics
Read on if you want to learn more about ShaderLab:
- Unity's Rendering Pipeline
- Performance Tips when Writing Shaders
- Rendering with Replaced Shaders
- Using Depth Textures
- Camera's Depth Texture
- Platform Specific Rendering Differences
- Shader Level of Detail
SL-RenderPipeline
Shaders define both how an object looks by itself (its material properties) and how it reacts to light. Because lighting calculations must be built into the shader, and there are many possible light and shadow types, writing quality shaders that "just work" is an involved task. To make it easier, Unity 3 introduces Surface Shaders, where all the lighting, shadowing, lightmapping and forward vs. deferred lighting details are taken care of automatically.
This document describes the particulars of Unity's lighting and rendering pipeline, and what happens behind the scenes of Surface Shaders.
Rendering Paths
How lighting is applied and which Passes of the shader are used depends on which Rendering Path is used. Each pass in a shader communicates its lighting type via Pass Tags.
- In Deferred Lighting, the PrepassBase and PrepassFinal passes are used.
- In Forward Rendering, the ForwardBase and ForwardAdd passes are used.
- In Vertex Lit, the Vertex, VertexLMRGBM and VertexLM passes are used.
- In any of the above, the ShadowCaster and ShadowCollector passes are used to render Shadows.
Deferred Lighting path
The PrepassBase pass renders normals and the specular exponent; the PrepassFinal pass renders the final color by combining textures, lighting and emissive material properties. All regular in-scene lighting is done separately in screen space. See Deferred Lighting for details.
Forward Rendering path
The ForwardBase pass renders ambient, lightmaps, the main directional light and not-important (vertex/SH) lights at once. The ForwardAdd pass is used for any additive per-pixel lights; one invocation per object illuminated by such a light is done. See Forward Rendering for details.
If forward rendering is used, but a shader does not have any forward-suitable passes (i.e. neither ForwardBase nor ForwardAdd), then that object is rendered just as it would be in the Vertex Lit path; see below.
Vertex Lit Rendering path
Since vertex lighting is most often used on platforms that do not support programmable shaders, Unity can't create multiple shader permutations internally to handle the lightmapped vs. non-lightmapped cases. So, to handle lightmapped and non-lightmapped objects, multiple passes have to be written explicitly:
- The Vertex pass is used for non-lightmapped objects. All lights are rendered at once, using a fixed function OpenGL/Direct3D lighting model (Blinn-Phong).
- The VertexLMRGBM pass is used for lightmapped objects when lightmaps are RGBM encoded (this happens on most desktops and consoles). No realtime lighting is applied; the pass is expected to combine textures with a lightmap.
- The VertexLM pass is used for lightmapped objects when lightmaps are double-LDR encoded (mobiles and old desktops). No realtime lighting is applied; the pass is expected to combine textures with a lightmap.
SL-ShaderPerformance
Use Common sense ;)
Compute only the things that you need; anything that is not actually needed can be eliminated. For example, supporting per-material color is nice to make a shader more flexible, but if you always leave that color set to white then you're doing useless computations for each vertex or pixel rendered on screen.
Another thing to keep in mind is frequency of computations. Usually there are many more pixels rendered (hence their pixel shaders executed) than there are vertices (vertex shader executions); and more vertices than objects being rendered. So generally if you can, move computations out of pixel shader into the vertex shader; or out of shaders completely and set the values once from a script.
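As a sketch of the idea, a value that varies smoothly across a surface (here, a made-up normal-based tint) can be evaluated once per vertex and passed down as an interpolator instead of being recomputed per pixel:

```
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"

struct v2f {
float4 pos : SV_POSITION;
fixed3 tint : TEXCOORD0; // computed once per vertex
};

v2f vert (appdata_base v) {
v2f o;
o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
// Done per vertex; the interpolated result is often
// indistinguishable from doing the math per pixel
o.tint = v.normal * 0.5 + 0.5;
return o;
}

fixed4 frag (v2f i) : COLOR {
return fixed4(i.tint, 1); // no per-pixel computation needed
}
ENDCG
```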
Less Generic Surface Shaders
Surface Shaders are great for writing shaders that interact with lighting. However, their default options are tuned for the "general case". In many cases, you can tweak them to make shaders run faster or at least be smaller:
- approxview directive for shaders that use view direction (i.e. Specular) will make the view direction be normalized per-vertex instead of per-pixel. This is approximate, but often good enough.
- halfasview for Specular shader types is even faster. The half-vector (halfway between the lighting direction and view vector) will be computed and normalized per vertex, and the lighting function will receive the half-vector as a parameter instead of the view vector.
- noforwardadd will make a shader fully support only one directional light in Forward rendering. The rest of the lights can still have an effect as per-vertex lights or spherical harmonics. This is great to make the shader smaller and make sure it always renders in one pass, even with multiple lights present.
- noambient will disable ambient lighting and spherical harmonics lights on a shader. This can be slightly faster.
Precision of computations
When writing shaders in Cg/HLSL, there are three basic number types: float, half and fixed (as well as vector & matrix variants of them, e.g. half3 and float4x4):
- float: high precision floating point. Generally 32 bits, just like the float type in regular programming languages.
- half: medium precision floating point. Generally 16 bits, with a range of -60000 to +60000 and 3.3 decimal digits of precision.
- fixed: low precision fixed point. Generally 11 bits, with a range of -2.0 to +2.0 and 1/256th precision.
Use the lowest precision that is possible; this is especially important on mobile platforms like iOS and Android. Good rules of thumb are:
- For colors and unit length vectors, use fixed.
- For others, use half if the range and precision are fine; otherwise use float.
On mobile platforms, the key is to ensure as much as possible stays in low precision in the fragment shader. On most mobile GPUs, applying swizzles to low precision (fixed/lowp) types is costly; converting between fixed/lowp and higher precision types is quite costly as well.
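A fragment program following these rules of thumb might declare its variables like this (sketch; _MainTex, _LightDir and the v2f fields i.uv, i.normal and i.dist are assumed to be declared elsewhere):

```
fixed4 frag (v2f i) : COLOR {
// Color data: fixed precision is enough
fixed4 albedo = tex2D(_MainTex, i.uv);
// Unit-length vector: fixed is enough
fixed3 n = normalize(i.normal);
// Larger range needed for the lighting math: use half
half diffuse = saturate(dot(n, (half3)_LightDir.xyz));
// Distance-based fade: half covers the range
half fade = saturate(1.0 - i.dist * 0.01);
return fixed4(albedo.rgb * diffuse * fade, albedo.a);
}
```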
Alpha Testing
Fixed function AlphaTest, or its programmable equivalent, clip(), has different performance characteristics on different platforms:
- Generally it's a small advantage to use it to cull out totally transparent pixels on most platforms.
- However, on PowerVR GPUs found in iOS and some Android devices, alpha testing is expensive. Do not try to use it as "performance optimization" there, it will be slower.
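In a programmable shader, the same cut-out can be sketched with clip(), which discards the fragment when its argument is negative (here _MainTex and _Cutoff are assumed to be declared elsewhere):

```
fixed4 frag (v2f i) : COLOR {
fixed4 col = tex2D(_MainTex, i.uv);
// Discard this fragment when alpha falls below the cutoff,
// similar in spirit to AlphaTest Greater [_Cutoff]
clip(col.a - _Cutoff);
return col;
}
```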
Color Mask
On some platforms (mostly mobile GPUs found in iOS and Android devices), using ColorMask to leave out some channels (e.g. ColorMask RGB) can be expensive, so only use it if really necessary.
SL-ShaderReplacement
Some rendering effects require rendering a scene with a different set of shaders. For example, good edge detection would need a texture with scene normals, so it could detect edges where surface orientations differ. Other effects might need a texture with scene depth, and so on. To achieve this, it is possible to render the scene with replaced shaders of all objects.
Shader replacement is done from scripting using Camera.RenderWithShader or Camera.SetReplacementShader functions. Both functions take a shader and a replacementTag.
It works like this: the camera renders the scene as it normally would. The objects still use their materials, but the actual shader that ends up being used is changed:
- If replacementTag is empty, then all objects in the scene are rendered with the given replacement shader.
- If replacementTag is not empty, then for each object that would be rendered:
- The real object's shader is queried for the tag value.
- If it does not have that tag, object is not rendered.
- A subshader is found in the replacement shader that has a given tag with the found value. If no such subshader is found, object is not rendered.
- Now that subshader is used to render the object.
So if all shaders had, for example, a "RenderType" tag with values like "Opaque", "Transparent", "Background", "Overlay", you could write a replacement shader that only renders solid objects by using one subshader with a RenderType=Solid tag. The other tag values would not be found in the replacement shader, so the objects would not be rendered. Or you could write several subshaders for different "RenderType" tag values. Incidentally, all built-in Unity shaders have a "RenderType" tag set.
Shader replacement tags in built-in Unity shaders
All built-in Unity shaders have a "RenderType" tag set that can be used when rendering with replaced shaders. Tag values are the following:
- Opaque: most of the shaders (Normal, Self Illuminated, Reflective, terrain shaders).
- Transparent: most semitransparent shaders (Transparent, Particle, Font, terrain additive pass shaders).
- TransparentCutout: masked transparency shaders (Transparent Cutout, two pass vegetation shaders).
- Background: Skybox shaders.
- Overlay: GUITexture, Halo, Flare shaders.
- TreeOpaque: terrain engine tree bark.
- TreeTransparentCutout: terrain engine tree leaves.
- TreeBillboard: terrain engine billboarded trees.
- Grass: terrain engine grass.
- GrassBillboard: terrain engine billboarded grass.
Built-in scene depth/normals texture
A Camera has a built-in capability to render depth or depth+normals texture, if you need that in some of your effects. See Camera Depth Texture page. Note that in some cases (depending on the hardware), the depth and depth+normals textures can internally be rendered using shader replacement. So it is important to have the correct "RenderType" tag in your shaders.
Page last updated: 2012-06-21
SL-DepthTextures
It is possible to create Render Textures where each pixel contains a high precision "depth" value (see RenderTextureFormat.Depth). This is mostly used when some effects need scene's depth to be available (for example, soft particles, screen space ambient occlusion, translucency would all need scene's depth).
Pixel values in the depth texture range from 0 to 1 with a nonlinear distribution. Precision is usually 24 or 16 bits, depending on depth buffer used. When reading from depth texture, a high precision value in 0..1 range is returned. If you need to get distance from the camera, or otherwise linear value, you should compute that manually.
Depth textures in Unity are implemented differently on different platforms.
- On Direct3D 9 (Windows), depth texture is either a native depth buffer, or a single channel 32 bit floating point texture ("R32F" Direct3D format).
- Graphics card must support either native depth buffer (INTZ format) or floating point render textures in order for them to work.
- When rendering into the depth texture, fragment program must output the value needed.
- When reading from depth texture, red component of the color contains the high precision value.
- On OpenGL (Mac OS X), depth texture is the native OpenGL depth buffer (see ARB_depth_texture).
- Graphics card must support OpenGL 1.4 or ARB_depth_texture extension.
- Depth texture corresponds to Z buffer contents that are rendered, it does not use the result from the fragment program.
- OpenGL ES 2.0 (iOS/Android) is very much like OpenGL above.
- GPU must support GL_OES_depth_texture extension.
- Direct3D 11 (Windows) has native depth texture capability just like OpenGL.
- Flash (Stage3D) uses a color-encoded depth texture to emulate the high precision required for it.
Using depth texture helper macros
Most of the time depth textures are used to render depth from the camera. UnityCG.cginc include file contains some macros to deal with the above complexity in this case:
- UNITY_TRANSFER_DEPTH(o): computes eye space depth of the vertex and outputs it in o (which must be a float2). Use it in a vertex program when rendering into a depth texture. On platforms with native depth textures this macro does nothing at all, because Z buffer value is rendered implicitly.
- UNITY_OUTPUT_DEPTH(i): returns eye space depth from i (which must be a float2). Use it in a fragment program when rendering into a depth texture. On platforms with native depth textures this macro always returns zero, because Z buffer value is rendered implicitly.
- COMPUTE_EYEDEPTH(o): computes eye space depth of the vertex and outputs it in o. Use it in a vertex program when not rendering into a depth texture.
- DECODE_EYEDEPTH(i): given high precision value from depth texture i, returns corresponding eye space depth. This macro just returns i*FarPlane on Direct3D. On platforms with native depth textures it linearizes and expands the value to match camera's range.
For example, this shader would render depth of its objects:
Shader "Render Depth" {
SubShader {
Tags { "RenderType"="Opaque" }
Pass {
Fog { Mode Off }
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct v2f {
float4 pos : SV_POSITION;
float2 depth : TEXCOORD0;
};
v2f vert (appdata_base v) {
v2f o;
o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
UNITY_TRANSFER_DEPTH(o.depth);
return o;
}
half4 frag(v2f i) : COLOR {
UNITY_OUTPUT_DEPTH(i.depth);
}
ENDCG
}
}
}
Page last updated: 2012-09-04
SL-CameraDepthTexture
In Unity a Camera can generate a depth or depth+normals texture. This is a minimalistic G-buffer texture that can be used for post-processing effects or to implement custom lighting models (e.g. light pre-pass). Camera actually builds the depth texture using Shader Replacement feature, so it's entirely possible to do that yourself, in case you need a different G-buffer setup.
Camera's depth texture can be turned on using Camera.depthTextureMode variable from script.
There are two possible depth texture modes:
- DepthTextureMode.Depth: a depth texture.
- DepthTextureMode.DepthNormals: depth and view space normals packed into one texture.
DepthTextureMode.Depth texture
This builds a screen-sized depth texture.
DepthTextureMode.DepthNormals texture
This builds a screen-sized 32 bit (8 bit/channel) texture, where view space normals are encoded into R&G channels, and depth is encoded in B&A channels. Normals are encoded using Stereographic projection, and depth is 16 bit value packed into two 8 bit channels.
UnityCG.cginc include file has a helper function DecodeDepthNormal to decode depth and normal from the encoded pixel value. Returned depth is in 0..1 range.
For examples on how to use the depth and normals texture, please refer to the EdgeDetection image effect in the Shader Replacement example project or SSAO Image Effect.
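A fragment program decoding the packed texture might look like this (sketch; _CameraDepthNormalsTexture is the built-in global sampler holding the packed data, and i.scrPos is assumed to be the screen-space UV computed in the vertex program):

```
#include "UnityCG.cginc"

sampler2D _CameraDepthNormalsTexture;

half4 frag (v2f i) : COLOR {
float depth;
float3 normal;
// Unpack the 0..1 depth and the view space normal
// from the encoded pixel value
DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.scrPos),
depth, normal);
// Visualize the decoded normal for debugging
return half4(normal * 0.5 + 0.5, 1);
}
```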
Tips & Tricks
When implementing complex shaders or Image Effects, keep Rendering Differences Between Platforms in mind. In particular, using depth texture in an Image Effect often needs special handling on Direct3D + Anti-Aliasing.
In some cases, the depth texture might come directly from the native Z buffer. If you see artifacts in your depth texture, make sure that the shaders that use it do not write into the Z buffer (use ZWrite Off).
Under the hood
Depth texture can come directly from the actual depth buffer, or be rendered in a separate pass, depending on the rendering path used and the hardware. When the depth texture is rendered in a separate pass, this is done through Shader Replacement. Hence it is important to have correct "RenderType" tag in your shaders.
Page last updated: 2012-09-04
SL-PlatformDifferences
Unity runs on various platforms, and in some cases there are differences in how things behave. Most of the time Unity hides the differences from you, but sometimes you can still bump into them.
Render Texture Coordinates
Vertical texture coordinate conventions differ between Direct3D, OpenGL and OpenGL ES:
- In Direct3D, the coordinate is zero at the top, and increases downwards.
- In OpenGL and OpenGL ES, the coordinate is zero at the bottom, and increases upwards.
Most of the time this does not really matter, except when rendering into a Render Texture. In that case, Unity internally flips rendering upside down when rendering into a texture on Direct3D, so that the conventions match between the platforms.
One case where this does not happen, is when Image Effects and Anti-Aliasing is used. In this case, Unity renders to screen to get anti-aliasing, and then "resolves" rendering into a RenderTexture for further processing with an Image Effect. The resulting source texture for an image effect is not flipped upside down on Direct3D (unlike all other Render Textures).
If your Image Effect is a simple one (processes one texture at a time), this does not really matter, because Graphics.Blit takes care of that.
However, if you're processing more than one RenderTexture together in your Image Effect, most likely they will come out at different vertical orientations (only in Direct3D-like platforms, and only when anti-aliasing is used). You need to manually "flip" the screen texture upside down in your vertex shader, like this:
// On D3D when AA is used, the main texture & scene depth texture
// will come out in different vertical orientations.
// So flip sampling of the texture when that is the case (main texture
// texel size will have negative Y).
#if UNITY_UV_STARTS_AT_TOP
if (_MainTex_TexelSize.y < 0)
uv.y = 1-uv.y;
#endif
Check out Edge Detection scene in Shader Replacement sample project for an example of this. Edge detection there uses both screen texture and Camera's Depth+Normals texture.
AlphaTest and programmable shaders
Some platforms, most notably mobile (OpenGL ES 2.0) and Direct3D 11, do not have fixed function alpha testing functionality. When you are using programmable shaders, it's advised to use Cg/HLSL clip() function in the pixel shader instead.
Direct3D 11 shader compiler is more picky about syntax
Direct3D 9 and OpenGL use NVIDIA's Cg to compile shaders, but Direct3D 11 (and Xbox 360) use Microsoft's HLSL shader compiler. HLSL compiler is more picky about various subtle shader errors. For example, it won't accept function output values that aren't initialized properly.
Most common places where you'd run into this:
- Surface shader vertex modifier that has an "out" parameter. Make sure to initialize the output like this:
void vert (inout appdata_full v, out Input o)
{
UNITY_INITIALIZE_OUTPUT(Input,o);
// ...
}
- Partially initialized values, e.g. a function returning float4, but the code only sets .xyz values of it. Make sure to set all values, or change to float3 if you only need those.
Using OpenGL Shading Language (GLSL) shaders with OpenGL ES 2.0
OpenGL ES 2.0 provides only limited native support for OpenGL Shading Language (GLSL); for instance, the OpenGL ES 2.0 layer provides no built-in parameters to the shader.
Unity implements the built-in parameters for you in exactly the same way as OpenGL does; however, the following built-in parameters are missing:
- gl_ClipVertex
- gl_SecondaryColor
- gl_DepthRange
- halfVector property of the gl_LightSourceParameters structure
- gl_FrontFacing
- gl_FrontLightModelProduct
- gl_BackLightModelProduct
- gl_BackMaterial
- gl_Point
- gl_PointSize
- gl_ClipPlane
- gl_EyePlaneR, gl_EyePlaneS, gl_EyePlaneT, gl_EyePlaneQ
- gl_ObjectPlaneR, gl_ObjectPlaneS, gl_ObjectPlaneT, gl_ObjectPlaneQ
- gl_Fog
iPad2 and MSAA and alpha-blended geometry
There is a bug in the Apple driver that results in artifacts when MSAA is enabled and alpha-blended geometry is drawn with a non-RGBA color mask. To prevent artifacts, we force an RGBA color mask when this configuration is encountered, though this renders the built-in Glow effect unusable (as it needs DST_ALPHA for the intensity value). Also, please update your shaders if you wrote them yourself (see "Render Setup -> ColorMask" in the Pass documentation).
Page last updated: 2012-11-26
SL-ShaderLOD
Shader Level of Detail (LOD) works by using only shaders or subshaders whose LOD value is less than a given number.
By default, the allowed LOD level is infinite; that is, all shaders supported by the user's hardware can be used. However, in some cases you might want to drop shader details even if the hardware can support them. For example, some cheap graphics cards might support all the features but be too slow to use them, so you may want to avoid parallax normal mapping on them.
Shader LOD can be either set per individual shader (using Shader.maximumLOD), or globally for all shaders (using Shader.globalMaximumLOD).
In your custom shaders, use the LOD command to set the LOD value for any subshader.
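As a minimal scripting sketch (Shader.globalMaximumLOD is the documented global setting; the class name and threshold here are illustrative, and the script would be attached to any GameObject in the scene):

```
using UnityEngine;

// Sketch: cap shader detail globally, e.g. for slow hardware.
public class ShaderLODSetup : MonoBehaviour
{
    void Start ()
    {
        // Allow only shaders/subshaders with LOD <= 300,
        // which skips the more expensive built-in shaders.
        Shader.globalMaximumLOD = 300;
    }
}
```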
Built-in shaders in Unity have their LODs set up this way:
- VertexLit kind of shaders = 100
- Decal, Reflective VertexLit = 150
- Diffuse = 200
- Diffuse Detail, Reflective Bumped Unlit, Reflective Bumped VertexLit = 250
- Bumped, Specular = 300
- Bumped Specular = 400
- Parallax = 500
- Parallax Specular = 600
SL-BuiltinValues
Unity provides a handful of built-in values for your shaders: things like the current object's transformation matrices, time, and so on.
You use them in ShaderLab like you'd use any other property; the only difference is that you don't have to declare them anywhere, because they are "built in".
Using them in programmable shaders requires including the UnityCG.cginc file.
Transformations
- float4x4 UNITY_MATRIX_MVP
- Current model*view*projection matrix
- float4x4 UNITY_MATRIX_MV
- Current model*view matrix
- float4x4 UNITY_MATRIX_P
- Current projection matrix
- float4x4 UNITY_MATRIX_T_MV
- Transpose of model*view matrix
- float4x4 UNITY_MATRIX_IT_MV
- Inverse transpose of model*view matrix
- float4x4 UNITY_MATRIX_TEXTURE0 to UNITY_MATRIX_TEXTURE3
- Texture transformation matrices
- float4x4 _Object2World
- Current model matrix
- float4x4 _World2Object
- Inverse of current world matrix
- float3 _WorldSpaceCameraPos
- World space position of the camera
- float4 unity_Scale
- .xyz components unused; .w contains scale for uniformly scaled objects
Lighting
In plain ShaderLab, you access the following properties by appending a zero at the end: e.g. the light's model*light color is _ModelLightColor0. In Cg shaders, they are exposed as arrays with a single element, so the same property in Cg is _ModelLightColor[0].
| Name | Type | Value |
| _ModelLightColor | float4 | Material's Main * Light color |
| _SpecularLightColor | float4 | Material's Specular * Light color |
| _ObjectSpaceLightPos | float4 | Light's position in object space. w component is 0 for directional lights, 1 for other lights |
| _Light2World | float4x4 | Light to World space matrix |
| _World2Light | float4x4 | World to Light space matrix |
| _Object2Light | float4x4 | Object to Light space matrix |
Various
- float4 _Time : Time (t/20, t, t*2, t*3), use to animate things inside the shaders
- float4 _SinTime : Sine of time: (t/8, t/4, t/2, t)
- float4 _CosTime : Cosine of time: (t/8, t/4, t/2, t)
- float4 _ProjectionParams :
- .x is 1.0 or -1.0, negative if currently rendering with a flipped projection matrix
- .y is the camera's near plane
- .z is the camera's far plane
- .w is 1/FarPlane
- float4 _ScreenParams :
- .x is the current render target width in pixels
- .y is the current render target height in pixels
- .z is 1.0 + 1.0/width
- .w is 1.0 + 1.0/height
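As a hedged sketch of using one of these built-in values (this is a fragment of a larger shader: the v2f struct and _MainTex property are assumed, and UnityCG.cginc is assumed to be included; note that no declaration of _Time is needed):

```
// Sketch: scrolling a texture with the built-in _Time value.
v2f vert (appdata_base v)
{
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    // _Time.y is unscaled time t; offset the UVs so the texture scrolls.
    o.uv = v.texcoord.xy + float2(_Time.y * 0.1, 0);
    return o;
}
```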
Scripting Concepts
Page last updated: 2007-11-16
Layers
Layers are most commonly used by Cameras to render only a part of the scene, and by Lights to illuminate only parts of the scene. But they can also be used by raycasting to selectively ignore colliders or to create collisions.
Creating Layers
The first step is to create a new layer, which we can then assign to a GameObject. To create a new layer, open the Edit menu and select .
We create a new layer in one of the empty User Layers. We choose layer 8.

Assigning Layers
Now that you have created a new layer, you have to assign the layer to one of the game objects.

In the tag manager we assigned the Player layer to be in layer 8.
Drawing only a part of the scene with the camera's culling mask
Using the camera's culling mask, you can selectively render objects which are in one particular layer. To do this, select the camera that should selectively render objects.
Modify the culling mask by checking or unchecking layers in the culling mask property.

Casting Rays Selectively
Using layers you can cast rays and ignore colliders in specific layers. For example you might want to cast a ray only against the player layer and ignore all other colliders.
The Physics.Raycast function takes a bitmask, where each bit determines whether a layer is ignored. If all bits in the layerMask are on, the ray collides against all colliders. If layerMask = 0, the ray never finds any collisions.
// JavaScript example.
// bit shift the index of the layer to get a bit mask
var layerMask = 1 << 8;
// Does the ray intersect any objects which are in the player layer.
if (Physics.Raycast (transform.position, Vector3.forward, Mathf.Infinity, layerMask))
print ("The ray hit the player");
// C# example.
int layerMask = 1 << 8;
// Does the ray intersect any objects which are in the player layer.
if (Physics.Raycast(transform.position, Vector3.forward, Mathf.Infinity, layerMask))
Debug.Log("The ray hit the player");
In practice, however, you often want the inverse: cast a ray against all colliders except those in the Player layer.
// JavaScript example.
function Update () {
// Bit shift the index of the layer (8) to get a bit mask
var layerMask = 1 << 8;
// This would cast rays only against colliders in layer 8.
// But instead we want to collide against everything except layer 8. The ~ operator does this, it inverts a bitmask.
layerMask = ~layerMask;
var hit : RaycastHit;
// Does the ray intersect any objects excluding the player layer
if (Physics.Raycast (transform.position, transform.TransformDirection (Vector3.forward), hit, Mathf.Infinity, layerMask)) {
Debug.DrawRay (transform.position, transform.TransformDirection (Vector3.forward) * hit.distance, Color.yellow);
print ("Did Hit");
} else {
Debug.DrawRay (transform.position, transform.TransformDirection (Vector3.forward) *1000, Color.white);
print ("Did not Hit");
}
}
// C# example.
void Update () {
// Bit shift the index of the layer (8) to get a bit mask
int layerMask = 1 << 8;
// This would cast rays only against colliders in layer 8.
// But instead we want to collide against everything except layer 8. The ~ operator does this, it inverts a bitmask.
layerMask = ~layerMask;
RaycastHit hit;
// Does the ray intersect any objects excluding the player layer
if (Physics.Raycast(transform.position, transform.TransformDirection (Vector3.forward), out hit, Mathf.Infinity, layerMask)) {
Debug.DrawRay(transform.position, transform.TransformDirection (Vector3.forward) * hit.distance, Color.yellow);
Debug.Log("Did Hit");
} else {
Debug.DrawRay(transform.position, transform.TransformDirection (Vector3.forward) *1000, Color.white);
Debug.Log("Did not Hit");
}
}
When you don't pass a layerMask to the Raycast function, it will only ignore colliders that use the IgnoreRaycast layer. This is the easiest way to ignore some colliders when casting a ray.
Page last updated: 2012-05-28
Layer-Based Collision Detection
Unity 3.x introduces layer-based collision detection, which allows you to make a GameObject collide only with GameObjects attached to specific layers.

Objects colliding only with their own layer
The image above shows six GameObjects (three planes and three cubes) and, on the right, the Collision Matrix that states which objects can collide with which layers. In this example, the Collision Matrix is set up so that only GameObjects belonging to the same layer can collide.
Setting up a GameObject to detect collisions based on layers
- Select the layer the GameObject should belong to.

- Repeat step 1 for each GameObject until every GameObject has been assigned to a layer.
- Open the Physics settings panel by selecting .
- Check the layers and, in the Collision Matrix, choose which layers interact with the others.

Tags
A Tag is a word which you link to one or more GameObjects. For instance, you might define "Player" and "Enemy" Tags for player-controlled characters and non-player characters respectively; a "Collectable" Tag could be defined for items the player can collect in the Scene; and so on. Clearly, Tags are intended to identify GameObjects for scripting purposes. We can use them to write script code to find a GameObject by looking for any object that contains our desired Tag. This is achieved using the GameObject.FindWithTag() function.
For example:
// Instantiates respawnPrefab at the location
// of the game object with tag "Respawn"
var respawnPrefab : GameObject;
var respawn = GameObject.FindWithTag ("Respawn");
Instantiate (respawnPrefab, respawn.position, respawn.rotation);
This saves us having to manually add our GameObjects to a script's exposed properties using drag and drop -- a useful timesaver if the same script code is being used in a number of GameObjects. Another example is a Trigger Collider control script which needs to work out whether the player is interacting with an enemy, as opposed to, say, a random prop or collectable item. Tags make this kind of test easy.
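As a hedged sketch of that trigger test (OnTriggerEnter and CompareTag are standard Unity callbacks/APIs; the class name and the "Enemy" tag are illustrative assumptions):

```
using UnityEngine;

// Sketch: a trigger that reacts only to GameObjects tagged "Enemy".
public class EnemyTrigger : MonoBehaviour
{
    void OnTriggerEnter (Collider other)
    {
        if (other.CompareTag ("Enemy"))
            Debug.Log ("An enemy entered the trigger");
    }
}
```

CompareTag avoids the string allocation that comparing other.tag == "Enemy" would incur each call.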
Applying a Tag
The Inspector will show the Tag and Layer drop-down menus just below any GameObject's name. To apply a Tag to a GameObject, simply open the Tags drop-down and choose the Tag you require:

The GameObject will now be associated with this Tag.
Creating new Tags
To create a new Tag, click the "Add new tag..." option at the end of the drop-down menu. This will open up the Tag Manager in the Inspector. The Tag Manager is described here.
Layers appear similar to Tags, but are used to define how Unity should render GameObjects in the Scene. See the Layers page for more information.
Hints
- A GameObject can only have one Tag assigned to it.
- Unity includes some built-in Tags which do not appear in the Tag Manager:
- "Untagged"
- "Respawn"
- "Finish"
- "EditorOnly"
- "MainCamera"
- "Player"
- and "GameController".
- You can use any word you like as a Tag. (You can even use short phrases, but you may need to widen the Inspector to see the tag's full name.)
RigidbodySleeping
When a rigidbody comes to rest (a box landing on the floor, say), it starts sleeping. Sleeping is an optimization that allows the physics engine to stop processing those rigidbodies. This way you can have plenty of rigidbodies in your scene, as long as you make sure they don't usually move.
Rigidbody sleeping happens completely automatically. Whenever a rigidbody moves slower than sleepAngularVelocity and sleepVelocity, it starts falling asleep; after a short period at rest, it is set to sleep. While a body is sleeping, no collision detection or simulation is performed for it, which saves a lot of CPU cycles.
A rigidbody wakes up automatically when:
- another rigidbody collides with the sleeping rigidbody;
- another rigidbody connected through a joint starts moving;
- a property of the rigidbody is modified;
- forces are added.
So if you want rigidbodies to stay asleep, don't modify their properties or add forces to them while they are about to enter sleep mode.
There are two variables you can tune to make rigidbodies fall asleep automatically: Rigidbody.sleepVelocity and Rigidbody.sleepAngularVelocity. These are initialized to the sleepVelocity and sleepAngularVelocity values defined in the Physics Manager (Edit -> Project Settings -> Physics).
A rigidbody can also be forced to sleep using rigidbody.Sleep. This is useful to start rigidbodies off in a resting state when loading a new level.
Kinematic rigidbodies wake up sleeping rigidbodies; static colliders do not. If a rigidbody is asleep and you move a static collider (a collider without a Rigidbody attached) into it, or pull the static collider out from underneath it, the sleeping rigidbody will not wake up. If you instead move a kinematic rigidbody out from underneath normal rigidbodies that are resting on top of it, the sleeping rigidbodies will wake up and be correctly recalculated again in the physics update.
Kinematic rigidbodies themselves are not calculated during the physics update, since they are not going anywhere. So if you have a lot of static colliders that you want to move around and have different objects fall on them correctly, use kinematic rigidbody colliders.
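A minimal sketch of forcing a rigidbody to start a level asleep, as described above (rigidbody.Sleep is the documented call; the class name is an illustrative assumption, and the script would be attached to the object whose rigidbody should start at rest):

```
using UnityEngine;

// Sketch: put this object's rigidbody to sleep when the level loads,
// so no simulation runs for it until something wakes it up.
public class StartAsleep : MonoBehaviour
{
    void Start ()
    {
        rigidbody.Sleep (); // the Unity 3.x-era shortcut property, as used in this manual
    }
}
```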
Page last updated: 2012-11-14